Application of Artificial Intelligence in Radiological Imaging Analysis and Diagnosis

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: 30 June 2025

Special Issue Information

Dear Colleagues,

The application of Artificial Intelligence (AI) in radiological imaging has significantly transformed the field of medical diagnostics, enabling faster and more accurate analyses, streamlining radiology workflows, assisting radiologists in making critical decisions, and improving patient care. This Special Issue aims to gather groundbreaking research and innovative applications that leverage AI to advance radiological imaging analysis and diagnosis.

We welcome submissions exploring topics including, but not limited to, the following:

  1. AI-Enhanced Radiological Image Analysis: Use of novel AI algorithms and methodologies for image segmentation, object detection, feature extraction, and disease classification in various radiological modalities, including X-ray, MRI, CT, ultrasound, and PET.
  2. Intelligent Decision Support Systems: Development and validation of AI-based decision support systems to aid radiologists in detecting abnormalities, diagnosing diseases, and providing personalized treatment recommendations.
  3. Natural Language Processing (NLP) for Radiological Reports: Application of NLP techniques to extract, analyze, and interpret information from radiological reports, enabling better communication and the integration of data into patient care.
  4. Transformer Machine Learning in Radiology: Applications of transformer-based models in medical imaging, including transformer-based segmentation, transfer learning, and anomaly detection.
  5. Generative AI: Utilization of Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and other generative models to generate synthetic medical images, improve data augmentation, and enhance training datasets.
  6. Large Language Models in Medical Imaging and Diagnosis: Utilizing state-of-the-art large language models to advance radiological diagnosis, disease prediction, and prognosis assessment.

Prof. Dr. Tim Duong
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • radiological imaging
  • medical diagnostics
  • intelligent decision support systems
  • natural language processing
  • machine learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)


Research

12 pages, 14345 KiB  
Article
Deep Learning-Based Joint Effusion Classification in Adult Knee Radiographs: A Multi-Center Prospective Study
by Hyeyeon Won, Hye Sang Lee, Daemyung Youn, Doohyun Park, Taejoon Eo, Wooju Kim and Dosik Hwang
Diagnostics 2024, 14(17), 1900; https://doi.org/10.3390/diagnostics14171900 - 29 Aug 2024
Abstract
Knee effusion, a common and important indicator of joint diseases such as osteoarthritis, is typically more discernible on magnetic resonance imaging (MRI) scans than on radiographs. However, the use of radiographs for the early detection of knee effusion remains promising due to their cost-effectiveness and accessibility. This multi-center prospective study collected a total of 1413 radiographs from four hospitals between February 2022 and March 2023, of which 1281 were analyzed after exclusions. To automatically detect knee effusion on radiographs, we utilized a state-of-the-art (SOTA) deep learning-based classification model with a novel preprocessing technique to optimize images for diagnosing knee effusion. The diagnostic performance of the proposed method was significantly higher than that of the baseline model, achieving an area under the receiver operating characteristic curve (AUC) of 0.892, an accuracy of 0.803, a sensitivity of 0.820, and a specificity of 0.785. Moreover, the proposed method significantly outperformed two non-orthopedic physicians. Coupled with an explainable artificial intelligence method for visualization, this approach improved not only diagnostic performance but also interpretability, highlighting areas of effusion. These results demonstrate that the proposed method enables the early and accurate classification of knee effusion on radiographs, thereby reducing healthcare costs and improving patient outcomes through timely intervention.
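
As a rough illustration of how the reported operating-point metrics can be derived from a classifier's outputs (a minimal scikit-learn sketch, not the authors' code; the labels, probabilities, and 0.5 threshold below are hypothetical):

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

# Hypothetical ground-truth labels (1 = effusion) and model probabilities.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.91, 0.12, 0.78, 0.55, 0.40, 0.08, 0.67, 0.30])
y_pred = (y_prob >= 0.5).astype(int)  # assumed operating threshold

auc = roc_auc_score(y_true, y_prob)
acc = accuracy_score(y_true, y_pred)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.3f}, accuracy={acc:.3f}, "
      f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
```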

13 pages, 15255 KiB  
Article
Three-Stage Framework for Accurate Pediatric Chest X-ray Diagnosis Using Self-Supervision and Transfer Learning on Small Datasets
by Yufeng Zhang, Joseph Kohne, Emily Wittrup and Kayvan Najarian
Diagnostics 2024, 14(15), 1634; https://doi.org/10.3390/diagnostics14151634 - 29 Jul 2024
Cited by 1
Abstract
Pediatric respiratory disease diagnosis and subsequent treatment require accurate and interpretable analysis. A chest X-ray is the most cost-effective and rapid method for identifying and monitoring various thoracic diseases in children. Recent developments in self-supervised and transfer learning have shown their potential in medical imaging, including chest X-ray analysis. In this article, we propose a three-stage framework with knowledge transfer from adult chest X-rays to aid the diagnosis and interpretation of pediatric thorax diseases. We conducted comprehensive experiments with different pre-training and fine-tuning strategies to develop transformer or convolutional neural network models and then evaluate them qualitatively and quantitatively. The ViT-Base/16 model, fine-tuned on the CheXpert dataset, a large chest X-ray dataset, emerged as the most effective, achieving a mean AUC of 0.761 (95% CI: 0.759–0.763) across six disease categories and demonstrating high sensitivity (average 0.639) and specificity (average 0.683), indicative of its strong discriminative ability. The baseline models, ViT-Small/16 and ViT-Base/16, when trained directly on the Pediatric CXR dataset, achieved mean AUC scores of only 0.646 (95% CI: 0.641–0.651) and 0.654 (95% CI: 0.648–0.660), respectively. Qualitatively, our model excels at localizing diseased regions, outperforming models pre-trained on ImageNet and other fine-tuning approaches, thus providing superior explanations. The source code is available online, and the data can be obtained from PhysioNet.
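
For readers unfamiliar with the fine-tuning setup described above, the fragment below is a sketch (not the authors' released code) of how a ViT-Base/16 backbone can be fine-tuned for multi-label chest X-ray classification with the timm library; the checkpoint name, six-label head, and hyperparameters are assumptions.

```python
import timm
import torch
import torch.nn as nn

# Minimal sketch: ViT-Base/16 backbone with a 6-way multi-label head
# (one logit per finding category); hyperparameters are illustrative only.
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=6)
criterion = nn.BCEWithLogitsLoss()  # multi-label findings
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)

def train_step(images: torch.Tensor, targets: torch.Tensor) -> float:
    """One fine-tuning step: images (N, 3, 224, 224), targets (N, 6) in {0, 1}."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```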

19 pages, 3176 KiB  
Article
Development of Clinical Radiomics-Based Models to Predict Survival Outcome in Pancreatic Ductal Adenocarcinoma: A Multicenter Retrospective Study
by Ayoub Mokhtari, Roberto Casale, Zohaib Salahuddin, Zelda Paquier, Thomas Guiot, Henry C. Woodruff, Philippe Lambin, Jean-Luc Van Laethem, Alain Hendlisz and Maria Antonietta Bali
Diagnostics 2024, 14(7), 712; https://doi.org/10.3390/diagnostics14070712 - 28 Mar 2024
Cited by 1
Abstract
Purpose. This multicenter retrospective study aims to identify reliable clinical and radiomic features to build machine learning models that predict progression-free survival (PFS) and overall survival (OS) in pancreatic ductal adenocarcinoma (PDAC) patients. Methods. Between 2010 and 2020, pre-treatment contrast-enhanced CT scans of 287 pathology-confirmed PDAC patients from two sites of the Hopital Universitaire de Bruxelles (HUB) and from 47 hospitals within the HUB network were retrospectively analysed. Demographic, clinical, and survival data were also collected. The gross tumour volume (GTV) and the non-tumoral pancreas (RPV) were semi-manually segmented, and radiomics features were extracted. Patients from the two HUB sites comprised the training dataset, while those from the remaining 47 hospitals of the HUB network constituted the testing dataset. A three-step method was used for feature selection. Based on the GradientBoostingSurvivalAnalysis classifier, different machine learning models were trained and tested to predict OS and PFS. Model performance was assessed using the C-index and Kaplan–Meier curves. SHAP analysis was applied to allow for post hoc interpretability. Results. A total of 107 radiomics features were extracted from each of the GTV and RPV. Fourteen feature subgroups were selected: clinical, GTV, RPV, clinical & GTV, clinical & GTV & RPV, GTV-volume, and RPV-volume, each for both OS and PFS. Subsequently, 14 Gradient Boosting Survival Analysis models were trained and tested. In the testing dataset, the clinical & GTV model demonstrated the highest performance for OS (C-index: 0.72) among all models, while for PFS, the clinical model exhibited superior performance (C-index: 0.70). Conclusions. An integrated approach combining clinical and radiomics features excels at predicting OS, whereas clinical features alone demonstrate strong performance in PFS prediction.
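
The survival-modelling step named in the abstract maps onto scikit-survival's GradientBoostingSurvivalAnalysis; the sketch below (synthetic data and hyperparameters are assumptions, not the study's settings) shows how such a model is fit and scored with the C-index.

```python
import numpy as np
from sksurv.ensemble import GradientBoostingSurvivalAnalysis
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

rng = np.random.default_rng(0)

# Synthetic stand-ins for clinical + radiomics feature matrices and outcomes.
X_train, X_test = rng.random((200, 15)), rng.random((80, 15))
y_train = Surv.from_arrays(event=rng.random(200) > 0.3,
                           time=rng.uniform(1, 60, 200))  # follow-up in months
y_test = Surv.from_arrays(event=rng.random(80) > 0.3,
                          time=rng.uniform(1, 60, 80))

model = GradientBoostingSurvivalAnalysis(n_estimators=200, learning_rate=0.05)
model.fit(X_train, y_train)

risk = model.predict(X_test)  # higher score = higher predicted risk
c_index = concordance_index_censored(y_test["event"], y_test["time"], risk)[0]
print(f"C-index on the test set: {c_index:.2f}")
```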

12 pages, 1791 KiB  
Article
ChatGPT’s Accuracy on Magnetic Resonance Imaging Basics: Characteristics and Limitations Depending on the Question Type
by Kyu-Hong Lee and Ro-Woon Lee
Diagnostics 2024, 14(2), 171; https://doi.org/10.3390/diagnostics14020171 - 12 Jan 2024
Cited by 2
Abstract
Our study aimed to assess the accuracy and limitations of ChatGPT in the domain of MRI, focusing on its performance in answering simple knowledge questions and specialized multiple-choice questions related to MRI. A two-step approach was used to evaluate ChatGPT. In the first step, 50 simple MRI-related questions were asked, and ChatGPT’s answers were categorized as correct, partially correct, or incorrect by independent researchers. In the second step, 75 multiple-choice questions covering various MRI topics were posed, and the answers were similarly categorized. Cohen’s kappa coefficient was used to assess interobserver agreement. ChatGPT demonstrated high accuracy in answering straightforward MRI questions, with over 85% of answers classified as correct. However, its performance varied considerably across the multiple-choice questions, with accuracy rates ranging from 40% to 66.7% depending on the topic, indicating a notable gap in its ability to handle more complex, specialized questions requiring deeper understanding and context. In conclusion, this study critically evaluates the accuracy of ChatGPT in addressing questions related to magnetic resonance imaging, highlighting its potential and limitations in the healthcare sector, particularly in radiology. Our findings demonstrate that ChatGPT, while proficient in responding to straightforward MRI-related questions, exhibits variability in its ability to accurately answer complex multiple-choice questions that require more profound, specialized knowledge of MRI. This discrepancy underscores the nuanced role AI can play in medical education and healthcare decision-making, necessitating a balanced approach to its application.
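
A short note on the agreement statistic: Cohen’s kappa, mentioned above, can be computed directly from two raters' gradings with scikit-learn; the category codes and gradings below are purely illustrative.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical gradings of ten ChatGPT answers by two independent researchers:
# 2 = correct, 1 = partially correct, 0 = incorrect.
rater_a = [2, 2, 1, 0, 2, 1, 2, 0, 2, 2]
rater_b = [2, 2, 1, 1, 2, 1, 2, 0, 1, 2]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level
```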

12 pages, 1107 KiB  
Article
Improved Cervical Lymph Node Characterization among Patients with Head and Neck Squamous Cell Carcinoma Using MR Texture Analysis Compared to Traditional FDG-PET/MR Features Alone
by Eric K. van Staalduinen, Robert Matthews, Adam Khan, Isha Punn, Renee F. Cattell, Haifang Li, Ana Franceschi, Ghassan J. Samara, Lukasz Czerwonka, Lev Bangiyev and Tim Q. Duong
Diagnostics 2024, 14(1), 71; https://doi.org/10.3390/diagnostics14010071 - 28 Dec 2023
Cited by 1
Abstract
Accurate differentiation of benign and malignant cervical lymph nodes is important for prognosis and treatment planning in patients with head and neck squamous cell carcinoma. We evaluated the diagnostic performance of magnetic resonance imaging (MRI) texture analysis and traditional 18F-deoxyglucose positron emission tomography (FDG-PET) features. This retrospective study included 21 patients with head and neck squamous cell carcinoma. We used MRI texture analysis and FDG-PET features to evaluate 109 histologically confirmed cervical lymph nodes (41 metastatic, 68 benign). Predictive models were evaluated using the area under the curve (AUC). Significant differences were observed between benign and malignant cervical lymph nodes for 36 of 41 texture features (p < 0.05). A combination of 22 MRI texture features discriminated benign from malignant nodal disease with an AUC, sensitivity, and specificity of 0.952, 92.7%, and 86.7%, respectively, which was comparable to maximum short-axis diameter, lymph node morphology, and maximum standardized uptake value (SUVmax). The addition of MRI texture features to traditional FDG-PET features differentiated these groups with the greatest AUC, sensitivity, and specificity (0.989, 97.5%, and 94.1%). Adding MRI texture features to lymph node morphology improved nodal assessment specificity from 70.6% to 88.2% among FDG-PET-indeterminate lymph nodes. Texture features are therefore useful for differentiating benign and malignant cervical lymph nodes in patients with head and neck squamous cell carcinoma; lymph node morphology and SUVmax remain accurate tools, and specificity among FDG-PET-indeterminate lymph nodes is further improved by the addition of MRI texture features.
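
Texture features of the kind described here are commonly computed with the open-source pyradiomics package; the sketch below is an assumed workflow with placeholder file paths (not the authors' pipeline), extracting grey-level co-occurrence matrix (GLCM) and first-order features from a segmented lymph node.

```python
from radiomics import featureextractor

# Assumed workflow: extract GLCM texture and first-order intensity features
# from an MRI volume and its lymph-node segmentation mask (placeholder paths).
extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("glcm")        # texture features
extractor.enableFeatureClassByName("firstorder")  # intensity statistics

features = extractor.execute("t2_mri.nii.gz", "lymph_node_mask.nii.gz")
texture = {k: v for k, v in features.items() if k.startswith("original_glcm")}
print(f"{len(texture)} GLCM texture features extracted")
```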

14 pages, 2303 KiB  
Article
Machine Learning Predicts Decompression Levels for Lumbar Spinal Stenosis Using Canal Radiomic Features from Computed Tomography Myelography
by Guoxin Fan, Dongdong Wang, Yufeng Li, Zhipeng Xu, Hong Wang, Huaqing Liu and Xiang Liao
Diagnostics 2024, 14(1), 53; https://doi.org/10.3390/diagnostics14010053 - 26 Dec 2023
Cited by 1
Abstract
Background: The accurate preoperative identification of decompression levels is crucial for the success of surgery in patients with multi-level lumbar spinal stenosis (LSS). The objective of this study was to develop machine learning (ML) classifiers that can predict decompression levels using computed tomography myelography (CTM) data from LSS patients. Methods: A total of 1095 lumbar levels from 219 patients were included in this study. The bony spinal canal in CTM images was manually delineated, and radiomic features were extracted. The extracted data were randomly divided into training and testing datasets (8:2). Six feature selection methods combined with 12 ML algorithms were employed, resulting in a total of 72 ML classifiers. The main evaluation indicator for all classifiers was the area under the receiver operating characteristic curve (ROC-AUC), with the precision–recall AUC (PR-AUC) serving as the secondary indicator. The prediction target was whether or not a given level required decompression. Results: Embedded linear support vector feature selection (embeddingLSVC) was the optimal feature selection method. The feature importance analysis revealed the top 5 of the 15 radiomic predictors, which included 2 texture features, 2 first-order intensity features, and 1 shape feature. Except for the shape feature, these features may be discernible by eye but are difficult to quantify. The top two ML classifiers were embeddingLSVC combined with a support vector machine (EmbeddingLSVC_SVM) and embeddingLSVC combined with gradient boosting (EmbeddingLSVC_GradientBoost). These classifiers achieved ROC-AUCs over 0.90 and PR-AUCs over 0.80 in independent testing, the best among the 72 classifiers. Further comparisons indicated that EmbeddingLSVC_SVM appeared to be the optimal classifier, demonstrating superior discrimination ability, slight advantages in Brier scores on the calibration curves, and in net benefit on decision curve analysis. Conclusions: ML successfully extracted valuable and interpretable radiomic features from the spinal canal in CTM images and accurately predicted decompression levels for LSS patients. The EmbeddingLSVC_SVM classifier has the potential to assist surgical decision making in clinical practice, as it showed high discrimination, advantageous calibration, and competitive utility in selecting decompression levels for LSS patients using canal radiomic features from CTM.
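
The two-stage design named in the results (embedded linear SVC feature selection followed by an SVM classifier) can be reproduced in outline with scikit-learn; the pipeline below is a sketch on synthetic data, and the penalty, C value, and kernel are assumptions rather than the study's tuned settings.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, LinearSVC

# Synthetic stand-in for canal radiomic features (label 1 = decompression level).
X, y = make_classification(n_samples=1095, n_features=107, n_informative=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Embedded feature selection with an L1-penalised linear SVC, then an RBF SVM.
clf = make_pipeline(
    StandardScaler(),
    SelectFromModel(LinearSVC(penalty="l1", dual=False, C=0.1)),
    SVC(kernel="rbf", probability=True),
)
clf.fit(X_train, y_train)

prob = clf.predict_proba(X_test)[:, 1]
print(f"ROC-AUC: {roc_auc_score(y_test, prob):.2f}  "
      f"PR-AUC: {average_precision_score(y_test, prob):.2f}")
```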

12 pages, 447 KiB  
Article
Using Artificial Intelligence to Stratify Normal versus Abnormal Chest X-rays: External Validation of a Deep Learning Algorithm at East Kent Hospitals University NHS Foundation Trust
by Sarah R. Blake, Neelanjan Das, Manoj Tadepalli, Bhargava Reddy, Anshul Singh, Rohitashva Agrawal, Subhankar Chattoraj, Dhruv Shah and Preetham Putha
Diagnostics 2023, 13(22), 3408; https://doi.org/10.3390/diagnostics13223408 - 9 Nov 2023
Cited by 3
Abstract
Background: The chest radiograph (CXR) is the most frequently performed radiological examination worldwide. The increasing volume of CXRs performed in hospitals causes reporting backlogs and increased waiting times for patients, potentially compromising timely clinical intervention and patient safety. Implementing computer-aided detection (CAD) artificial intelligence (AI) algorithms capable of accurate and rapid CXR reporting could help address these limitations. A novel use for AI reporting is the classification of CXRs as ‘abnormal’ or ‘normal’. This classification could help optimize resource allocation and aid radiologists in managing their time efficiently. Methods: qXR is CE-marked CAD software trained on over 4.4 million CXRs. In this retrospective cross-sectional pre-deployment study, we evaluated the performance of qXR in stratifying normal and abnormal CXRs. We analyzed 1040 CXRs from various referral sources, including general practices (GP), Accident and Emergency (A&E) departments, and inpatient (IP) and outpatient (OP) settings at East Kent Hospitals University NHS Foundation Trust. The ground truth for the CXRs was established by assessing the agreement between two senior radiologists. Results: The CAD software had a sensitivity of 99.7% and a specificity of 67.4%. Sub-group analysis showed no statistically significant difference in performance across healthcare settings, age, gender, and X-ray manufacturer. Conclusions: The study showed that qXR can accurately stratify CXRs as normal versus abnormal, potentially reducing reporting backlogs and enabling earlier patient intervention, which may lead to better patient outcomes.
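
To make the headline figures concrete, the snippet below shows how sensitivity and specificity (with Wilson 95% confidence intervals) are computed from confusion-matrix counts; the counts are hypothetical, chosen only to approximate the reported 99.7% and 67.4%, and are not the study's actual data.

```python
from statsmodels.stats.proportion import proportion_confint

# Hypothetical counts for 1040 CXRs (positive = abnormal); illustrative only.
tp, fn = 610, 2      # abnormal CXRs flagged / missed by the CAD software
tn, fp = 288, 140    # normal CXRs correctly passed / incorrectly flagged

sens = tp / (tp + fn)
spec = tn / (tn + fp)
sens_ci = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
spec_ci = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")
print(f"Sensitivity {sens:.1%} (95% CI {sens_ci[0]:.1%} to {sens_ci[1]:.1%})")
print(f"Specificity {spec:.1%} (95% CI {spec_ci[0]:.1%} to {spec_ci[1]:.1%})")
```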
