Recent Advances in Artificial Intelligence-Based Medical Image Analysis

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: closed (30 June 2024) | Viewed by 33169

Special Issue Editor

Dr. Wenyi Shao
1. Leidos Inc., Tewksbury, MA 01876, USA
2. Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
Interests: SPECT; brain imaging; microwave medical imaging; image reconstruction algorithms

Special Issue Information

Dear Colleagues,

This Special Issue will explore the intersection of artificial intelligence and medical imaging techniques and the future impact of these rapidly changing technologies. All manuscripts on this topic, including original research and review articles, will be welcomed. Of particular interest are papers that focus on CT, PET, SPECT, MRI, microwave, and photoacoustic image reconstruction, and image analysis using common neural networks or generative adversarial networks (GANs).  

Dr. Wenyi Shao
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine learning
  • deep learning
  • image reconstruction
  • medical image analysis

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (16 papers)


Research


14 pages, 644 KiB  
Article
An Analytical Study on the Utility of RGB and Multispectral Imagery with Band Selection for Automated Tumor Grading
by Suchithra Kunhoth and Somaya Al-Maadeed
Diagnostics 2024, 14(15), 1625; https://doi.org/10.3390/diagnostics14151625 - 27 Jul 2024
Viewed by 849
Abstract
The implementation of tumor grading tasks with image processing and machine learning techniques has progressed immensely over the past several years. Multispectral imaging enables the sample to be captured as a set of image bands corresponding to different wavelengths in the visible and infrared spectra. This higher-dimensional image data can be exploited to deliver a range of discriminative features to support the tumor grading application. This paper compares the classification accuracy of RGB and multispectral images, using a case study on colorectal tumor grading with the QU-Al Ahli Dataset (dataset I). Rotation-invariant local phase quantization (LPQ) features with an SVM classifier resulted in 80% accuracy for the RGB images, compared to 86% accuracy with the multispectral images in dataset I. However, the higher dimensionality elevates the processing time. We propose a band-selection strategy based on the mutual information between image bands, which eliminates redundant bands and increases classification accuracy. The results show that our band-selection method outperforms both the plain RGB and full multispectral approaches. The band-selection algorithm was also tested on another colorectal tumor dataset, the Texas University Dataset (dataset II), to further validate the results. The proposed method demonstrates an accuracy of more than 94% with 10 bands, compared to using the whole set of 16 multispectral bands. Our research emphasizes the advantages of multispectral imaging over the RGB imaging approach and proposes a band-selection method to address the higher computational demands of multispectral imaging. Full article
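The band-selection idea this abstract describes (scoring redundancy via the mutual information between image bands, then dropping redundant bands) can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the histogram-based MI estimate and the greedy drop rule are illustrative assumptions.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Estimate mutual information between two image bands via a joint histogram."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def select_bands(cube, keep):
    """Greedily drop the band most redundant (highest mean MI) with the rest.
    cube has shape (H, W, B); returns the indices of the surviving bands."""
    bands = list(range(cube.shape[-1]))
    while len(bands) > keep:
        mi = [np.mean([mutual_information(cube[..., i], cube[..., j])
                       for j in bands if j != i]) for i in bands]
        bands.pop(int(np.argmax(mi)))
    return bands
```

A duplicated band shares maximal MI with its copy, so one of the pair is discarded first while independent bands survive.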

23 pages, 5891 KiB  
Article
GETNet: Group Normalization Shuffle and Enhanced Channel Self-Attention Network Based on VT-UNet for Brain Tumor Segmentation
by Bin Guo, Ning Cao, Ruihao Zhang and Peng Yang
Diagnostics 2024, 14(12), 1257; https://doi.org/10.3390/diagnostics14121257 - 14 Jun 2024
Cited by 1 | Viewed by 883
Abstract
Currently, brain tumors are extremely harmful and prevalent. Deep learning technologies, including CNNs, UNet, and Transformer, have been applied in brain tumor segmentation for many years and have achieved some success. However, traditional CNNs and UNet capture insufficient global information, and Transformer cannot provide sufficient local information. Fusing the global information from Transformer with the local information of convolutions is an important step toward improving brain tumor segmentation. We propose the Group Normalization Shuffle and Enhanced Channel Self-Attention Network (GETNet), a network combining the pure Transformer structure with convolution operations based on VT-UNet, which considers both global and local information. The network includes the proposed group normalization shuffle block (GNS) and enhanced channel self-attention block (ECSA). The GNS is used after the VT Encoder Block and before the downsampling block to improve information extraction. An ECSA module is added to the bottleneck layer to utilize the characteristics of the detailed features in the bottom layer effectively. We also conducted experiments on the BraTS2021 dataset to demonstrate the performance of our network. The Dice coefficient (Dice) score results show that the values for the regions of the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) were 91.77, 86.03, and 83.64, respectively. The results show that the proposed model achieves state-of-the-art performance compared with more than eleven benchmarks. Full article
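The group-normalization-and-shuffle idea behind the GNS block named above can be sketched generically in numpy: normalize statistics within channel groups, then interleave channels across groups. This is a textbook illustration of the two operations, not the GETNet code; shapes, group counts, and the omission of learned scale/shift are assumptions.

```python
import numpy as np

def group_norm(x, groups, eps=1e-5):
    """Group normalization over (channels-per-group, H, W) for x of shape
    (N, C, H, W); the learned gamma/beta parameters are omitted for brevity."""
    n, c, h, w = x.shape
    g = x.reshape(n, groups, c // groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)

def channel_shuffle(x, groups):
    """Interleave channels across groups (ShuffleNet-style) so information
    mixes between groups in the next grouped operation."""
    n, c, h, w = x.shape
    return x.reshape(n, groups, c // groups, h, w).swapaxes(1, 2).reshape(n, c, h, w)
```

With 8 channels in 2 groups, shuffling reorders channels 0..7 to 0, 4, 1, 5, 2, 6, 3, 7.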

15 pages, 17085 KiB  
Article
Development, Application and Utility of a Machine Learning Approach for Melanoma and Non-Melanoma Lesion Classification Using Counting Box Fractal Dimension
by Pablo Romero-Morelos, Elizabeth Herrera-López and Beatriz González-Yebra
Diagnostics 2024, 14(11), 1132; https://doi.org/10.3390/diagnostics14111132 - 29 May 2024
Viewed by 1034
Abstract
The diagnosis and identification of melanoma are not always accurate, even for experienced dermatologists. Histopathology continues to be the gold standard, assessing specific parameters such as the Breslow index, but it remains invasive and may lack effectiveness. Therefore, leveraging mathematical modeling and informatics has been a pursuit of diagnostic methods favoring early detection. Fractality, a mathematical parameter quantifying complexity and irregularity, has proven useful in melanoma diagnosis. Nonetheless, no studies have implemented this metric to feed artificial intelligence algorithms for the automatic classification of dermatological lesions, including melanoma. Hence, this study aimed to determine the combined utility of fractal dimension and unsupervised, low-computational-requirement machine learning models in classifying melanoma and non-melanoma lesions. We analyzed 39,270 dermatological lesions obtained from the International Skin Imaging Collaboration. Box-counting fractal dimensions were calculated for these lesions, and the fractal values were used to implement classification by unsupervised machine learning based on principal component analysis and iterated K-means (100 iterations). Using only fractal dimension values, a clear separation was observed between benign and malignant lesions (sensitivity 72.4% and specificity 50.1%) and between melanoma and non-melanoma lesions (sensitivity 72.8% and specificity 50%); the classification quality based on the machine learning model was ≈80% for both benign versus malignant and melanoma versus non-melanoma lesions. However, the grouping of metastatic melanoma (MM) versus non-metastatic melanoma was less effective, probably due to the small number of MM lesions in the sample. Nevertheless, we can suggest a decision algorithm based on fractal dimension for dermatological lesion discrimination, and the fractal dimension alone proved sufficient to generate unsupervised artificial intelligence models that allow a more efficient classification of dermatological lesions. Full article
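The box-counting fractal dimension that feeds the classifiers above can be sketched in a few lines of numpy: count occupied boxes at dyadic scales and fit the slope of log(count) against log(1/box size). This is a textbook-style illustration, not the study's pipeline; the choice of scales and the least-squares fit are standard but assumed here.

```python
import numpy as np

def box_counting_dimension(mask):
    """Box-counting fractal dimension of a 2-D binary lesion mask."""
    size = min(mask.shape)
    sizes, counts = [], []
    k = size // 2
    while k >= 1:
        h = (mask.shape[0] // k) * k           # crop to a multiple of the box size
        w = (mask.shape[1] // k) * k
        boxed = mask[:h, :w].reshape(h // k, k, w // k, k).any(axis=(1, 3))
        sizes.append(k)
        counts.append(max(int(boxed.sum()), 1))  # guard log(0) for empty scales
        k //= 2
    # Slope of log(count) vs log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)
```

A filled square yields a dimension of 2 and a single pixel-wide line yields 1, the expected limiting cases.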

27 pages, 14796 KiB  
Article
Deep Learning-Based Classification and Semantic Segmentation of Lung Tuberculosis Lesions in Chest X-ray Images
by Chih-Ying Ou, I-Yen Chen, Hsuan-Ting Chang, Chuan-Yi Wei, Dian-Yu Li, Yen-Kai Chen and Chuan-Yu Chang
Diagnostics 2024, 14(9), 952; https://doi.org/10.3390/diagnostics14090952 - 30 Apr 2024
Viewed by 1854
Abstract
We present a deep learning (DL) network-based approach for detecting and semantically segmenting two specific types of tuberculosis (TB) lesions in chest X-ray (CXR) images. In the proposed method, we use a basic U-Net model and its enhanced versions to detect, classify, and segment TB lesions in CXR images. The model architectures used in this study are U-Net, Attention U-Net, U-Net++, Attention U-Net++, and pyramid spatial pooling (PSP) Attention U-Net++, which are optimized and compared based on the test results of each model to find the best parameters. Finally, we use four ensemble approaches that combine the top five models to further improve lesion classification and segmentation results. In the training stage, we use data augmentation and preprocessing methods to increase the number and strength, respectively, of lesion features in CXR images. Our dataset consists of 110 training, 14 validation, and 98 test images. The experimental results show that the proposed ensemble model achieves a maximum mean intersection-over-union (MIoU) of 0.70, a mean precision of 0.88, a mean recall of 0.75, a mean F1-score of 0.81, and an accuracy of 1.0, all of which are better than the results of any single-network model. The proposed method can be used by clinicians as a diagnostic tool to assist in the examination of TB lesions in CXR images. Full article

17 pages, 2693 KiB  
Article
A New Method of Artificial-Intelligence-Based Automatic Identification of Lymphovascular Invasion in Urothelial Carcinomas
by Bogdan Ceachi, Mirela Cioplea, Petronel Mustatea, Julian Gerald Dcruz, Sabina Zurac, Victor Cauni, Cristiana Popp, Cristian Mogodici, Liana Sticlaru, Alexandra Cioroianu, Mihai Busca, Oana Stefan, Irina Tudor, Carmen Dumitru, Alexandra Vilaia, Alexandra Oprisan, Alexandra Bastian and Luciana Nichita
Diagnostics 2024, 14(4), 432; https://doi.org/10.3390/diagnostics14040432 - 16 Feb 2024
Cited by 2 | Viewed by 1916
Abstract
The presence of lymphovascular invasion (LVI) in urothelial carcinoma (UC) is a poor prognostic finding. LVI is difficult to identify on routine hematoxylin–eosin (H&E)-stained slides, but, considering the costs and time required for examination, immunohistochemical stains for the endothelium are not part of the recommended diagnostic protocol. We developed an AI-based automated method for LVI identification on H&E-stained slides. We selected two separate groups of UC patients with transurethral resection specimens: group A had 105 patients (100 with UC; 5 with cystitis), and group B had 55 patients (all with high-grade UC; D2-40 and CD34 immunohistochemical stains performed on each block). All the group A slides and 52 H&E cases from group B showing LVI on immunohistochemistry were scanned using an Aperio GT450 automatic scanner. We performed pixel-per-pixel semantic segmentation of selected areas, and we trained InternImage to identify several classes. The Dice coefficient (DCC) and intersection-over-union (IoU) scores for LVI detection using our method were 0.77 and 0.52, respectively. The pathologists' H&E-based evaluation in group B showed 89.65% specificity, 42.30% sensitivity, 67.27% accuracy, and an F1 score of 0.55, which is much lower than the algorithm's DCC of 0.77. Our model outlines LVI on H&E-stained slides more effectively than human examiners; thus, it can prove a valuable tool for pathologists. Full article
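The Dice coefficient and intersection-over-union reported above are standard overlap metrics for segmentation masks. A minimal numpy sketch, not tied to the authors' code, shows how they relate:

```python
import numpy as np

def dice_and_iou(pred, target):
    """Dice coefficient and intersection-over-union for two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # Both metrics are defined as 1.0 when both masks are empty.
    dice = 2 * inter / (pred.sum() + target.sum()) if union else 1.0
    iou = inter / union if union else 1.0
    return float(dice), float(iou)
```

The two always satisfy Dice = 2·IoU / (1 + IoU), which is why a Dice of 0.77 corresponds to a lower IoU.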

17 pages, 3137 KiB  
Article
Accurate Intervertebral Disc Segmentation Approach Based on Deep Learning
by Yu-Kai Cheng, Chih-Lung Lin, Yi-Chi Huang, Guo-Shiang Lin, Zhen-You Lian and Cheng-Hung Chuang
Diagnostics 2024, 14(2), 191; https://doi.org/10.3390/diagnostics14020191 - 16 Jan 2024
Cited by 1 | Viewed by 1392
Abstract
Automatically segmenting specific tissues or structures from medical images is a straightforward task for deep learning models. However, identifying a few specific objects from a group of similar targets can be a challenging task. This study focuses on the segmentation of certain specific intervertebral discs from lateral spine images acquired from an MRI scanner. In this research, an approach is proposed that utilizes MultiResUNet models and employs saliency maps for target intervertebral disc segmentation. First, a sub-image cropping method is used to separate the target discs. This method uses MultiResUNet to predict the saliency maps of target discs and crop sub-images for easier segmentation. Then, MultiResUNet is used to segment the target discs in these sub-images. The distance maps of the segmented discs are then calculated and combined with their original image for data augmentation to predict the remaining target discs. The training set and test set use 2674 and 308 MRI images, respectively. Experimental results demonstrate that the proposed method significantly enhances segmentation accuracy to about 98%. The performance of this approach highlights its effectiveness in segmenting specific intervertebral discs from closely similar discs. Full article
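A distance map, as used above for data augmentation, assigns each pixel its distance to the nearest foreground (segmented-disc) pixel. A brute-force numpy illustration follows; it is suitable only for small masks and is not the authors' implementation (production code would use an optimized distance transform such as scipy.ndimage.distance_transform_edt).

```python
import numpy as np

def distance_map(mask):
    """Euclidean distance from every pixel to the nearest foreground pixel.
    Brute force O(H * W * K) over K foreground pixels; illustrative only."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return np.full(mask.shape, np.inf)
    gy, gx = np.mgrid[:mask.shape[0], :mask.shape[1]]
    d2 = (gy[..., None] - ys) ** 2 + (gx[..., None] - xs) ** 2
    return np.sqrt(d2.min(axis=-1))
```

For a single foreground pixel at the center of a 3x3 mask, the corners sit at distance sqrt(2) and the edge neighbors at 1.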

19 pages, 6793 KiB  
Article
Automatic Diabetic Foot Ulcer Recognition Using Multi-Level Thermographic Image Data
by Ikramullah Khosa, Awais Raza, Mohd Anjum, Waseem Ahmad and Sana Shahab
Diagnostics 2023, 13(16), 2637; https://doi.org/10.3390/diagnostics13162637 - 10 Aug 2023
Cited by 3 | Viewed by 3065
Abstract
Lower extremity diabetic foot ulcers (DFUs) are a severe consequence of diabetes mellitus (DM). It has been estimated that people with diabetes have a 15% to 25% lifetime risk of acquiring DFUs, which, with poor diagnosis and treatment, carries a risk of lower limb amputation of up to 85%. The diabetic foot develops plantar ulcers, and thermography is used to detect changes in plantar temperature. In this study, publicly available thermographic image data, including both control group and diabetic group patients, are used. Thermograms at the image level as well as the patch level are utilized for DFU detection. For DFU recognition, several machine-learning-based classification approaches are employed with hand-crafted features. Moreover, two convolutional neural network models, ResNet50 and DenseNet121, are evaluated for DFU recognition. Finally, a custom-developed CNN-based model is proposed for the recognition task. The results are produced using image-level data, patch-level data, and image–patch combination data. The proposed CNN-based model outperformed the evaluated models as well as the state-of-the-art models in terms of AUC and accuracy. Moreover, the recognition accuracy for both the machine-learning and deep-learning approaches was higher for the image-level thermogram data than for the patch-level or combined image–patch thermograms. Full article
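Patch-level data as described above can be produced by tiling each thermogram into fixed-size crops. A generic numpy sketch follows; the patch size and stride are illustrative, not the study's settings.

```python
import numpy as np

def extract_patches(image, patch, stride):
    """Tile a 2-D image into (patch x patch) crops with the given stride,
    discarding partial windows at the borders."""
    h, w = image.shape[:2]
    return np.array([image[y:y + patch, x:x + patch]
                     for y in range(0, h - patch + 1, stride)
                     for x in range(0, w - patch + 1, stride)])
```

A 4x4 image with patch=2 and stride=2 yields four non-overlapping 2x2 crops.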

21 pages, 2337 KiB  
Article
“Quo Vadis Diagnosis”: Application of Informatics in Early Detection of Pneumothorax
by V. Dhilip Kumar, P. Rajesh, Oana Geman, Maria Daniela Craciun, Muhammad Arif and Roxana Filip
Diagnostics 2023, 13(7), 1305; https://doi.org/10.3390/diagnostics13071305 - 30 Mar 2023
Cited by 2 | Viewed by 1559
Abstract
A pneumothorax is a condition that occurs when air enters the pleural space—the area between the lung and chest wall—causing the lung to collapse and making it difficult to breathe. This can happen spontaneously or as a result of an injury. The symptoms of a pneumothorax may include chest pain, shortness of breath, and rapid breathing. Although chest X-rays are commonly used to detect a pneumothorax, locating the affected area visually in X-ray images can be time-consuming and prone to errors. Existing computer technology for detecting this disease from X-rays is limited by three major issues: class disparity, which causes overfitting; difficulty in detecting dark portions of the images; and the vanishing gradient. To address the first two issues, we propose an ensemble deep learning model called PneumoNet, which uses synthetic images from data augmentation to address the class disparity issue and a segmentation system to identify dark areas. Finally, the vanishing gradient, in which gradients become very small during backpropagation, is addressed by hyperparameter optimization techniques that prevent the model from converging slowly and performing poorly. Our model achieved an accuracy of 98.41% on the Society for Imaging Informatics in Medicine pneumothorax dataset, outperforming other deep learning models and reducing the computational complexity of detecting the disease. Full article

21 pages, 2933 KiB  
Article
Prediction of the as Low as Diagnostically Acceptable CT Dose for Identification of the Inferior Alveolar Canal Using 3D Convolutional Neural Networks with Multi-Balancing Strategies
by Asma’a Al-Ekrish, Syed Azhar Hussain, Hebah ElGibreen, Rana Almurshed, Luluah Alhusain, Romed Hörmann and Gerlig Widmann
Diagnostics 2023, 13(7), 1220; https://doi.org/10.3390/diagnostics13071220 - 23 Mar 2023
Viewed by 1854
Abstract
Ionizing radiation is necessary for diagnostic imaging, and choosing the right radiation dose is extremely critical to obtaining a decent-quality image. However, increasing the dosage to improve image quality carries risks due to the potential harm from ionizing radiation. Thus, finding the optimal as-low-as-diagnostically-acceptable (ALADA) dosage is an open research problem that has yet to be tackled using artificial intelligence (AI) methods. This paper proposes a new multi-balancing 3D convolutional neural network methodology to build 3D multidetector computed tomography (MDCT) datasets and develop a 3D classifier model that can work properly with 3D CT scan images and balance itself over heavily unbalanced multi-class data. The proposed models were exhaustively investigated through eighteen empirical experiments and three re-runs for clinical expert examination. As a result, it was possible to confirm that the proposed models improved accuracy by 5% to 10% compared to the baseline method. Furthermore, the resulting models were found to be consistent, and thus possibly applicable to different MDCT examinations and reconstruction techniques. The outcome of this paper can help radiologists to predict the suitability of CT dosages across different CT hardware devices and reconstruction algorithms. Moreover, the developed model is suitable for clinical application, where the right dose needs to be predicted from numerous MDCT examinations using a certain MDCT device and reconstruction technique. Full article

10 pages, 1595 KiB  
Article
Auto-Detection of Motion Artifacts on CT Pulmonary Angiograms with a Physician-Trained AI Algorithm
by Giridhar Dasegowda, Bernardo C. Bizzo, Parisa Kaviani, Lina Karout, Shadi Ebrahimian, Subba R. Digumarthy, Nir Neumark, James M. Hillis, Mannudeep K. Kalra and Keith J. Dreyer
Diagnostics 2023, 13(4), 778; https://doi.org/10.3390/diagnostics13040778 - 18 Feb 2023
Cited by 1 | Viewed by 2177
Abstract
Purpose: Motion-impaired CT images can result in limited or suboptimal diagnostic interpretation (with missed or miscalled lesions) and patient recall. We trained and tested an artificial intelligence (AI) model for identifying substantial motion artifacts on CT pulmonary angiography (CTPA) that have a negative impact on diagnostic interpretation. Methods: With IRB approval and HIPAA compliance, we queried our multicenter radiology report database (mPower, Nuance) for CTPA reports between July 2015 and March 2022 for the following terms: “motion artifacts”, “respiratory motion”, “technically inadequate”, and “suboptimal” or “limited exam”. All CTPA reports were from two quaternary (Site A, n = 335; B, n = 259) and a community (C, n = 199) healthcare sites. A thoracic radiologist reviewed CT images of all positive hits for motion artifacts (present or absent) and their severity (no diagnostic effect or major diagnostic impairment). Coronal multiplanar images from 793 CTPA exams were de-identified and exported offline into an AI model building prototype (Cognex Vision Pro, Cognex Corporation) to train an AI model to perform two-class classification (“motion” or “no motion”) with data from the three sites (70% training dataset, n = 554; 30% validation dataset, n = 239). Separately, data from Site A and Site C were used for training and validating; testing was performed on the Site B CTPA exams. A five-fold repeated cross-validation was performed to evaluate the model performance with accuracy and receiver operating characteristics analysis (ROC). Results: Among the CTPA images from 793 patients (mean age 63 ± 17 years; 391 males, 402 females), 372 had no motion artifacts, and 421 had substantial motion artifacts. The statistics for the average performance of the AI model after five-fold repeated cross-validation for the two-class classification included 94% sensitivity, 91% specificity, 93% accuracy, and 0.93 area under the ROC curve (AUC: 95% CI 0.89–0.97). 
Conclusion: The AI model used in this study can successfully identify CTPA exams with motion artifacts that limit diagnostic interpretation in multicenter training and test datasets. Clinical relevance: The AI model used in the study can help alert technologists to the presence of substantial motion artifacts on CTPA, where a repeat image acquisition can help salvage diagnostic information. Full article
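The five-fold repeated cross-validation used above partitions the exams into folds, training on four and validating on the fifth in rotation. A minimal numpy sketch of a single shuffled k-fold split (illustrative only; the study's tooling is not specified beyond the prototype named) is:

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Shuffled k-fold split: yields (train_idx, val_idx) pairs so that every
    sample appears in exactly one validation fold."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```

Repeating this with different seeds and averaging the per-split accuracy gives the repeated-cross-validation estimate reported in the abstract.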

17 pages, 3170 KiB  
Article
Distinctions between Choroidal Neovascularization and Age Macular Degeneration in Ocular Disease Predictions via Multi-Size Kernels ξcho-Weighted Median Patterns
by Alex Liew, Sos Agaian and Samir Benbelkacem
Diagnostics 2023, 13(4), 729; https://doi.org/10.3390/diagnostics13040729 - 14 Feb 2023
Cited by 10 | Viewed by 2856
Abstract
Age-related macular degeneration is a visual disorder caused by abnormalities in a part of the eye’s retina and is a leading source of blindness. The correct detection, precise location, classification, and diagnosis of choroidal neovascularization (CNV) may be challenging if the lesion is small or if Optical Coherence Tomography (OCT) images are degraded by projection and motion. This paper aims to develop an automated quantification and classification system for CNV in neovascular age-related macular degeneration using OCT angiography images. OCT angiography is a non-invasive imaging tool that visualizes retinal and choroidal physiological and pathological vascularization. The presented system is based on a new feature extractor for macular diseases in retinal-layer OCT images, the Multi-Size Kernels ξcho-Weighted Median Patterns (MSKξMP). Computer simulations show that the proposed method (i) outperforms current state-of-the-art methods, including deep learning techniques, and (ii) achieves an overall accuracy of 99% using ten-fold cross-validation on the Duke University dataset and over 96% on the noisy Noor Eye Hospital dataset. In addition, MSKξMP performs well in binary eye disease classification and is more accurate than recent work on image texture descriptors. Full article

18 pages, 1436 KiB  
Article
Intracranial Hemorrhage Detection Using Parallel Deep Convolutional Models and Boosting Mechanism
by Muhammad Asif, Munam Ali Shah, Hasan Ali Khattak, Shafaq Mussadiq, Ejaz Ahmed, Emad Abouel Nasr and Hafiz Tayyab Rauf
Diagnostics 2023, 13(4), 652; https://doi.org/10.3390/diagnostics13040652 - 9 Feb 2023
Cited by 12 | Viewed by 2615
Abstract
Intracranial hemorrhage (ICH) can lead to death or disability, and it requires immediate action from radiologists. Due to the heavy workload, less experienced staff, and the complexity of subtle hemorrhages, a more intelligent and automated system is necessary to detect ICH. Many artificial-intelligence-based methods have been proposed in the literature, but they are less accurate for ICH detection and subtype classification. Therefore, in this paper, we present a new methodology to improve the detection and subtype classification of ICH based on two parallel paths and a boosting technique. The first path employs the ResNet101-V2 architecture to extract potential features from windowed slices, whereas Inception-V4 captures significant spatial information in the second path. Afterwards, the detection and subtype classification of ICH is performed by a light gradient boosting machine (LGBM) using the outputs of ResNet101-V2 and Inception-V4. The combined solution, named Res-Inc-LGBM, is trained and tested on the brain computed tomography (CT) scans of the CQ500 and Radiological Society of North America (RSNA) datasets. The experimental results show that the proposed solution obtains 97.7% accuracy, 96.5% sensitivity, and a 97.4% F1 score on the RSNA dataset. Moreover, the proposed Res-Inc-LGBM outperforms the standard benchmarks for the detection and subtype classification of ICH regarding accuracy, sensitivity, and F1 score. The results demonstrate the significance of the proposed solution for real-time application. Full article

19 pages, 3618 KiB  
Article
Brain Stroke Classification via Machine Learning Algorithms Trained with a Linearized Scattering Operator
by Valeria Mariano, Jorge A. Tobon Vasquez, Mario R. Casu and Francesca Vipiana
Diagnostics 2023, 13(1), 23; https://doi.org/10.3390/diagnostics13010023 - 21 Dec 2022
Cited by 15 | Viewed by 2433
Abstract
This paper proposes an efficient and fast method to create large datasets for machine learning algorithms applied to brain stroke classification via microwave imaging systems. The proposed method is based on the distorted Born approximation and linearization of the scattering operator, in order to minimize the time to generate the large datasets needed to train the machine learning algorithms. The method is then applied to a microwave imaging system, which consists of twenty-four antennas conformal to the upper part of the head, realized with a 3D anthropomorphic multi-tissue model. Each antenna acts as a transmitter and receiver, and the working frequency is 1 GHz. The data are elaborated with three machine learning algorithms: support vector machine, multilayer perceptron, and k-nearest neighbours, comparing their performance. All classifiers can identify the presence or absence of the stroke, the kind of stroke (haemorrhagic or ischemic), and its position within the brain. The trained algorithms were tested with datasets generated via full-wave simulations of the overall system, considering also slightly modified antennas and limiting the data acquisition to amplitude only. The obtained results are promising for a possible real-time brain stroke classification. Full article
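The speed-up described above comes from linearizing the scattering operator, so that generating a training sample reduces to a matrix-vector product instead of a full-wave simulation. A toy numpy sketch of such dataset generation follows; the dense operator A and the sparse "stroke" contrast maps are illustrative stand-ins, not the paper's discretized distorted-Born operator.

```python
import numpy as np

def synth_dataset(A, n_samples, contrast, seed=0):
    """Generate scattered-field samples via a linearized forward model y = A @ x:
    each x is a sparse contrast map (a stand-in for a stroke region), and y is
    its simulated measurement. A stands in for the linearized scattering operator."""
    rng = np.random.default_rng(seed)
    n_meas, n_pix = A.shape
    X = np.zeros((n_samples, n_pix))
    for i in range(n_samples):
        k = int(rng.integers(1, n_pix // 4 + 1))          # stroke size in pixels
        X[i, rng.choice(n_pix, k, replace=False)] = contrast
    return X, X @ A.T                                      # contrast maps, measurements
```

Because each sample costs only one matrix multiply, thousands of labeled (x, y) pairs can be produced in seconds, which is the point of the linearization.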

15 pages, 9846 KiB  
Article
A 3-D Full Convolution Electromagnetic Reconstruction Neural Network (3-D FCERNN) for Fast Super-Resolution Electromagnetic Inversion of Human Brain
by Yu Cheng, Li-Ye Xiao, Le-Yi Zhao, Ronghan Hong and Qing Huo Liu
Diagnostics 2022, 12(11), 2786; https://doi.org/10.3390/diagnostics12112786 - 14 Nov 2022
Cited by 1 | Viewed by 1567
Abstract
Three-dimensional (3-D) super-resolution microwave imaging of the human brain is a typical electromagnetic (EM) inverse scattering problem with high contrast. Traditional schemes based on deterministic or stochastic inversion methods struggle to achieve both high contrast and high resolution, and they require substantial computational time. In this work, a dual-module 3-D EM inversion scheme based on deep neural networks is proposed. The proposed scheme can solve inverse scattering problems with high contrast at super-resolution in real time, at a greatly reduced computational cost. In the EM inversion module, a 3-D full convolution EM reconstruction neural network (3-D FCERNN) nonlinearly maps the measured scattered field to a preliminary image of the 3-D electrical parameter distribution of the human brain. The proposed 3-D FCERNN is composed entirely of convolution layers, which greatly reduces training cost and improves model generalization compared with fully connected networks. The image enhancement module then employs a U-Net to further improve the imaging quality of the 3-D FCERNN results. In addition, a dataset generation strategy based on human brain features is proposed, which addresses the difficulty of collecting human brain datasets and the associated high training cost. The proposed scheme has been confirmed to be effective and accurate in reconstructing the 3-D super-resolution electrical parameter distribution of the human brain in both noise-free and noisy examples, whereas the traditional EM inversion method struggles to converge in the case of high contrast and strong scatterers. Compared with our previous work, the FCERNN trains faster and requires significantly fewer computational resources.
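The "composed entirely of convolution layers" property is what lets such a network run on inputs of any spatial size with a fixed parameter count. The 2-D numpy sketch below is not the paper's 3-D FCERNN; it is a minimal fully-convolutional stack with invented layer sizes, shown only to make that property concrete.

```python
import numpy as np

def conv2d_same(x, w, b):
    """'Same'-padded 2-D convolution. x: (C_in, H, W),
    w: (C_out, C_in, k, k), b: (C_out,)."""
    c_out, c_in, k, _ = w.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    H, W = x.shape[1:]
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(c_in):
            for u in range(k):
                for v in range(k):
                    out[o] += w[o, i, u, v] * xp[i, u:u + H, v:v + W]
        out[o] += b[o]
    return out

def fcn(x, layers):
    """Fully-convolutional stack: ReLU on hidden layers, linear output."""
    for idx, (w, b) in enumerate(layers):
        x = conv2d_same(x, w, b)
        if idx < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return x

rng = np.random.default_rng(2)
# Three conv layers: field channels -> hidden -> electrical-parameter map.
layers = [
    (0.1 * rng.normal(size=(8, 2, 3, 3)), np.zeros(8)),
    (0.1 * rng.normal(size=(8, 8, 3, 3)), np.zeros(8)),
    (0.1 * rng.normal(size=(1, 8, 3, 3)), np.zeros(1)),
]

field = rng.normal(size=(2, 16, 16))       # real/imag field channels (toy)
image = fcn(field, layers)                 # (1, 16, 16)
# Same weights applied to a differently sized input: no retraining needed.
image2 = fcn(rng.normal(size=(2, 24, 24)), layers)
print(image.shape, image2.shape)
```

Because no fully-connected layer ties the parameter count to the grid size, the same weights map a 16x16 and a 24x24 input, which is the generalization and training-cost advantage the abstract attributes to the all-convolutional design.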

12 pages, 4835 KiB  
Article
Generation of Digital Brain Phantom for Machine Learning Application of Dopamine Transporter Radionuclide Imaging
by Wenyi Shao, Kevin H. Leung, Jingyan Xu, Jennifer M. Coughlin, Martin G. Pomper and Yong Du
Diagnostics 2022, 12(8), 1945; https://doi.org/10.3390/diagnostics12081945 - 12 Aug 2022
Cited by 4 | Viewed by 2464
Abstract
While machine learning (ML) methods may significantly improve image quality in SPECT imaging for the diagnosis and monitoring of Parkinson’s disease (PD), they require a large amount of training data. It is often difficult to collect a large population of patient data to support ML research, and the lesion ground truth is also unknown. This paper leverages a generative adversarial network (GAN) to generate digital brain phantoms for training ML-based PD SPECT algorithms. A total of 594 3D PET brain models from 155 patients (113 male and 42 female) were reviewed, and 1597 2D slices containing all or part of the striatum were selected. Corresponding attenuation maps were also generated from these images. The data were then used to develop a GAN for generating 2D brain phantoms, where each phantom consists of a radioactivity image and the corresponding attenuation map. Statistical methods including histograms, the Fréchet distance, and structural similarity were used to evaluate the generator on 10,000 generated phantoms. When the generated phantoms and the training dataset were both passed to the discriminator, similar normal distributions were obtained, indicating that the discriminator was unable to distinguish the generated phantoms from the training data. The generated digital phantoms can be used for 2D SPECT simulation and serve as ground truth for developing ML-based reconstruction algorithms. The accumulated experience from this work also lays the foundation for building a 3D GAN for the same application.
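The evaluation step described above, comparing discriminator-score distributions for real and generated data, can be illustrated with the closed-form Fréchet (2-Wasserstein) distance between two univariate Gaussians, d² = (μ₁ − μ₂)² + (σ₁ − σ₂)². The score samples below are synthetic stand-ins, not the paper's actual discriminator outputs.

```python
import numpy as np

def frechet_1d(a, b):
    """Frechet distance between Gaussians fitted to two 1-D samples:
    d = sqrt((mu1 - mu2)^2 + (s1 - s2)^2)."""
    return np.hypot(a.mean() - b.mean(), a.std() - b.std())

rng = np.random.default_rng(3)
real_scores = rng.normal(0.0, 1.0, 10000)   # discriminator scores, training data (toy)
fake_scores = rng.normal(0.05, 1.0, 10000)  # discriminator scores, phantoms (toy)

d = frechet_1d(real_scores, fake_scores)
print(f"Frechet distance: {d:.3f}")
# A distance near zero means the two score histograms are statistically
# close, i.e. the discriminator cannot tell phantoms from training data.
```

This is the same idea that underlies the FID metric for image GANs, applied here to scalar discriminator scores, where the Gaussian fit makes the distance a two-line computation.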

Other


11 pages, 3169 KiB  
Brief Report
An Anatomical Template for the Normalization of Medical Images of Adult Human Hands
by Jay Hegdé, Nicholas J. Tustison, William T. Parker, Fallon Branch, Nathan Yanasak and Lorie A. Stumpo
Diagnostics 2023, 13(12), 2010; https://doi.org/10.3390/diagnostics13122010 - 9 Jun 2023
Cited by 1 | Viewed by 1303
Abstract
During medical image analysis, it is often useful to align (or ‘normalize’) a given image of a body part to a representative standard (or ‘template’) of that body part. The impact that brain templates have had on the analysis of brain images highlights the importance of templates in general. However, no template for the human hand has previously existed. Image normalization is especially important for hand images because hands, by their very nature, readily change shape during various tasks. Here we report the construction of an anatomical template for healthy adult human hands. To do this, we used 27 anatomically representative T1-weighted magnetic resonance (MR) images of either hand from 21 demographically representative healthy adult subjects (13 females and 8 males). We used the open-source, cross-platform ANTs (Advanced Normalization Tools) medical image analysis software framework to preprocess the MR images. The template was constructed using the standard ANTs multivariate template construction workflow. The resulting template preserved all the essential anatomical features of the hand, including the individual bones, muscles, tendons, and ligaments, as well as the main branches of the median nerve and the radial, ulnar, and palmar metacarpal arteries. Furthermore, the image quality of the template was significantly higher than that of the underlying individual hand images, as measured by two independent canonical metrics of image quality.
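The register-then-average loop at the heart of template construction, and the reason a template can be sharper than any single input image, can be shown on 1-D toy data. This is a conceptual sketch only: ANTs performs multivariate diffeomorphic registration, whereas the code below uses a simple integer-shift "registration" on synthetic profiles.

```python
import numpy as np

rng = np.random.default_rng(4)

def best_shift(sig, ref, max_shift=10):
    """Integer shift of `sig` that best matches `ref` (toy rigid registration)."""
    shifts = range(-max_shift, max_shift + 1)
    scores = [np.dot(np.roll(sig, s), ref) for s in shifts]
    return list(shifts)[int(np.argmax(scores))]

# 21 noisy, randomly shifted copies of a shared 1-D "anatomy" profile.
truth = np.exp(-0.5 * ((np.arange(128) - 64) / 6.0) ** 2)
subjects = [np.roll(truth, rng.integers(-8, 9)) + 0.05 * rng.normal(size=128)
            for _ in range(21)]

template = np.mean(subjects, axis=0)       # blurry unaligned average
for _ in range(4):                         # register-then-average iterations
    aligned = [np.roll(s, best_shift(s, template)) for s in subjects]
    template = np.mean(aligned, axis=0)

# After alignment, averaging cancels noise without blurring anatomy, so the
# iterated template tracks the true profile better than the naive average.
err_naive = np.linalg.norm(np.mean(subjects, axis=0) - truth)
err_template = np.linalg.norm(template - truth)
print(err_naive, err_template)
```

Averaging aligned images suppresses per-subject noise by roughly the square root of the number of subjects, which is consistent with the abstract's finding that the template's image quality exceeds that of the individual hand images.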
