Artificial Intelligence in Image-Based Screening, Diagnostics, and Clinical Care of Cardiopulmonary Diseases

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: closed (15 October 2022) | Viewed by 42757

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Dr. Sameer Antani
Guest Editor
National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA
Interests: machine learning; artificial intelligence; medical image analysis; image informatics; multimodal data analysis; data science

Dr. Sivaramakrishnan Rajaraman
Guest Editor
National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA
Interests: machine learning; artificial intelligence; computer vision; medical image analysis; data science; biomaterial-associated infections; music therapy

Special Issue Information

Dear Colleagues,

Cardiothoracic and pulmonary diseases are a significant cause of mortality and morbidity worldwide, and the COVID-19 pandemic has brought them additional focus. According to a recent American Lung Association report, more than 228,000 people will be diagnosed with lung cancer in the United States alone this year, with the rate of new cases varying by state. Heart disease, likewise, causes mortality irrespective of ethnic and racial origin. In addition, infectious diseases such as tuberculosis (TB), often coupled with human immunodeficiency virus (HIV) comorbidity, increasingly present with drug-resistant strains that greatly impact treatment pathways and survival rates.

Screening, diagnosis, and management of such cardiopulmonary diseases have become difficult owing to the limited availability of diagnostic tools and experts, particularly in low- and middle-income regions. Early screening and accurate diagnosis and staging of cardiopulmonary diseases could play a crucial role in treatment and care, and could potentially help reduce mortality.

Radiographic imaging methods such as computed tomography (CT), chest X-rays (CXRs), and echo ultrasound are widely used in screening and diagnosis. Image-based artificial intelligence (AI) and machine learning (ML) methods can enable rapid assessment, serve as surrogates for expert assessment, and reduce variability in human performance.

Through this Special Issue, “Artificial Intelligence in Image-Based Screening, Diagnostics, and Clinical Care of Cardiopulmonary Diseases”, we aim to highlight primary research studies and literature reviews focusing on novel AI/ML methods and their application in image-based screening, diagnosis, and clinical management of cardiopulmonary diseases. We hope that the Special Issue will help convey the state of the art in AI that exhibits the potential to make a significant contribution to an important global health challenge.

We invite leading researchers to submit their previously unpublished and novel research in this area.

Dr. Sameer Antani
Dr. Sivaramakrishnan Rajaraman
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial intelligence
  • Image-based screening and diagnostics
  • Computer-aided diagnosis
  • Machine learning and deep learning
  • Cardiothoracic and pulmonary diseases
  • Radiographic imaging
  • Computed tomography (CT)
  • Chest X-rays (CXRs)
  • Echo ultrasound

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (13 papers)

Editorial


7 pages, 230 KiB  
Editorial
Editorial on Special Issue “Artificial Intelligence in Image-Based Screening, Diagnostics, and Clinical Care of Cardiopulmonary Diseases”
by Sivaramakrishnan Rajaraman and Sameer Antani
Diagnostics 2022, 12(11), 2615; https://doi.org/10.3390/diagnostics12112615 - 27 Oct 2022
Cited by 1 | Viewed by 1216
Abstract
Cardiopulmonary diseases are a significant cause of mortality and morbidity worldwide [...] Full article

Research


20 pages, 16828 KiB  
Article
Image Embeddings Extracted from CNNs Outperform Other Transfer Learning Approaches in Classification of Chest Radiographs
by Noemi Gozzi, Edoardo Giacomello, Martina Sollini, Margarita Kirienko, Angela Ammirabile, Pierluca Lanzi, Daniele Loiacono and Arturo Chiti
Diagnostics 2022, 12(9), 2084; https://doi.org/10.3390/diagnostics12092084 - 28 Aug 2022
Cited by 8 | Viewed by 2641
Abstract
To identify the best transfer learning approach for the identification of the most frequent abnormalities on chest radiographs (CXRs), we used embeddings extracted from pretrained convolutional neural networks (CNNs). An explainable AI (XAI) model was applied to interpret black-box model predictions and assess its performance. Seven CNNs were trained on CheXpert. Three transfer learning approaches were thereafter applied to a local dataset. The classification results were ensembled using simple and entropy-weighted averaging. We applied Grad-CAM (an XAI model) to produce a saliency map. Grad-CAM maps were compared to manually extracted regions of interest, and the training time was recorded. The best transfer learning model was that which used image embeddings and random forest with simple averaging, with an average AUC of 0.856. Grad-CAM maps showed that the models focused on specific features of each CXR. CNNs pretrained on a large public dataset of medical images can be exploited as feature extractors for tasks of interest. The extracted image embeddings contain relevant information that can be used to train an additional classifier with satisfactory performance on an independent dataset, demonstrating it to be the optimal transfer learning strategy and overcoming the need for large private datasets, extensive computational resources, and long training times. Full article
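
The strategy found optimal here, frozen CNN backbones used as feature extractors with a conventional classifier trained on the embeddings, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the backbone choice, layer handling, and data loader are assumptions.

```python
# Illustrative sketch, not the authors' code: a pretrained CNN serves as a
# frozen feature extractor, and a random forest is trained on the embeddings.
import torch
import torchvision.models as models
from sklearn.ensemble import RandomForestClassifier

backbone = models.densenet121(weights="IMAGENET1K_V1")
backbone.classifier = torch.nn.Identity()  # drop the classification head
backbone.eval()

@torch.no_grad()
def extract_embeddings(loader):
    """Collect one embedding vector per CXR from the frozen backbone."""
    feats, labels = [], []
    for x, y in loader:            # loader: assumed PyTorch DataLoader
        feats.append(backbone(x))
        labels.append(y)
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

X_train, y_train = extract_embeddings(train_loader)  # train_loader is assumed
rf = RandomForestClassifier(n_estimators=500).fit(X_train, y_train)
# Probabilities from several backbone/classifier pairs can then be combined
# by simple (unweighted) averaging, as in the best-performing ensemble above.
```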

20 pages, 5379 KiB  
Article
Detecting Coronary Artery Disease from Computed Tomography Images Using a Deep Learning Technique
by Abdulaziz Fahad AlOthman, Abdul Rahaman Wahab Sait and Thamer Abdullah Alhussain
Diagnostics 2022, 12(9), 2073; https://doi.org/10.3390/diagnostics12092073 - 26 Aug 2022
Cited by 11 | Viewed by 3678
Abstract
In recent times, coronary artery disease (CAD) has become one of the leading causes of morbidity and mortality across the globe. Diagnosing the presence and severity of CAD in individuals is essential for choosing the best course of treatment. Presently, computed tomography (CT) provides high-spatial-resolution images of the heart and coronary arteries in a short period and allows excellent visualization of the coronary arteries. On the other hand, there are many challenges in analyzing cardiac CT scans for signs of CAD, and research studies apply machine learning (ML) to overcome these limitations with high accuracy and consistent performance. Convolutional neural networks (CNNs) are widely applied in medical image processing to identify diseases; however, efficient feature extraction is needed to enhance the performance of ML techniques. Thus, this study develops a method to detect CAD from CT angiography images, proposing a feature extraction method and a CNN model for detecting CAD in minimum time with optimal accuracy. Two datasets are utilized to evaluate the performance of the proposed model. The present work is unique in applying a feature extraction model with a CNN for CAD detection. The experimental analysis shows that the proposed method achieves 99.2% and 98.73% prediction accuracy, with F1 scores of 98.95 and 98.82, on the two benchmark datasets. In addition, the proposed CNN model achieves areas under the receiver operating characteristic and precision-recall curves of 0.92 and 0.96 for dataset 1, and 0.91 and 0.90 for dataset 2, respectively. The findings highlight that the performance of the proposed feature extraction and CNN model is superior to that of existing models. Full article
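
The reported evaluation metrics (areas under the ROC and precision-recall curves) can be computed with scikit-learn; a generic sketch with placeholder labels and scores, not the authors' pipeline:

```python
# Generic sketch of the reported metrics, not the authors' pipeline.
from sklearn.metrics import roc_auc_score, average_precision_score

# y_true: ground-truth CAD labels; y_score: model probabilities (placeholders).
auroc = roc_auc_score(y_true, y_score)            # area under the ROC curve
auprc = average_precision_score(y_true, y_score)  # area under the PR curve
print(f"AUROC = {auroc:.2f}, AUPRC = {auprc:.2f}")
```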

18 pages, 1219 KiB  
Article
Deep Transfer Learning for the Multilabel Classification of Chest X-ray Images
by Guan-Hua Huang, Qi-Jia Fu, Ming-Zhang Gu, Nan-Han Lu, Kuo-Ying Liu and Tai-Been Chen
Diagnostics 2022, 12(6), 1457; https://doi.org/10.3390/diagnostics12061457 - 13 Jun 2022
Cited by 15 | Viewed by 2513
Abstract
Chest X-ray (CXR) is widely used to diagnose conditions affecting the chest, its contents, and its nearby structures. In this study, we used a private data set containing 1630 CXR images with disease labels; most of the images were disease-free, but the others contained multiple sites of abnormalities. Here, we used deep convolutional neural network (CNN) models to extract feature representations and to identify possible diseases in these images. We also used transfer learning combined with large open-source image data sets to resolve the problems of insufficient training data and optimize the classification model. The effects on transfer learning of different approaches to reusing pretrained weights (model finetuning and layer transfer), source data sets of different sizes and similarity levels to the target data (ImageNet, ChestX-ray, and CheXpert), methods of integrating source data sets into transfer learning (initiating, concatenating, and co-training), and backbone CNN models (ResNet50 and DenseNet121) were also assessed. The results demonstrated that transfer learning applied with the model finetuning approach typically afforded better prediction models. When only one source data set was adopted, ChestX-ray performed better than CheXpert; however, after ImageNet initials were attached, CheXpert performed better. ResNet50 performed better in initiating transfer learning, whereas DenseNet121 performed better in concatenating and co-training transfer learning. Transfer learning with multiple source data sets was preferable to that with a single source data set. Overall, transfer learning can further enhance prediction capabilities and reduce computing costs for CXR images. Full article
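
The two weight-reuse approaches compared in this study, model finetuning and layer transfer, can be sketched roughly as follows (an assumed PyTorch/torchvision illustration, not the authors' implementation; the label count and freezing depth are placeholders):

```python
# Assumed PyTorch/torchvision illustration of the two weight-reuse approaches.
import torch.nn as nn
import torchvision.models as models

NUM_LABELS = 14  # placeholder for the number of disease labels

def build_finetune_model():
    """Model finetuning: start from pretrained weights, retrain all layers."""
    m = models.resnet50(weights="IMAGENET1K_V1")
    m.fc = nn.Linear(m.fc.in_features, NUM_LABELS)  # new multilabel head
    return m  # every parameter remains trainable

def build_layer_transfer_model(frozen_children=6):
    """Layer transfer: freeze the early layers, retrain only the later ones."""
    m = models.resnet50(weights="IMAGENET1K_V1")
    for child in list(m.children())[:frozen_children]:
        for p in child.parameters():
            p.requires_grad = False
    m.fc = nn.Linear(m.fc.in_features, NUM_LABELS)
    return m

# Either model would be trained with a multilabel loss such as
# nn.BCEWithLogitsLoss on the CXR data set.
```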

15 pages, 1952 KiB  
Article
A Deep Modality-Specific Ensemble for Improving Pneumonia Detection in Chest X-rays
by Sivaramakrishnan Rajaraman, Peng Guo, Zhiyun Xue and Sameer K. Antani
Diagnostics 2022, 12(6), 1442; https://doi.org/10.3390/diagnostics12061442 - 11 Jun 2022
Cited by 12 | Viewed by 2449
Abstract
Pneumonia is an acute respiratory infectious disease caused by bacteria, fungi, or viruses. Fluid-filled lungs due to the disease result in painful breathing difficulties and reduced oxygen intake. Effective diagnosis is critical for appropriate and timely treatment and improving survival. Chest X-rays (CXRs) are routinely used to screen for the infection. Computer-aided detection methods using conventional deep learning (DL) models for identifying pneumonia-consistent manifestations in CXRs have demonstrated superiority over traditional machine learning approaches. However, their performance is still inadequate to aid in clinical decision-making. This study improves upon the state of the art as follows. Specifically, we train a DL classifier on large collections of CXR images to develop a CXR modality-specific model. Next, we use this model as the classifier backbone in the RetinaNet object detection network. We also initialize this backbone using random weights and ImageNet-pretrained weights. Finally, we construct an ensemble of the best-performing models resulting in improved detection of pneumonia-consistent findings. Experimental results demonstrate that an ensemble of the top-3 performing RetinaNet models outperformed individual models in terms of the mean average precision (mAP) metric (0.3272, 95% CI: (0.3006,0.3538)) toward this task, which is markedly higher than the state of the art (mAP: 0.2547). This performance improvement is attributed to the key modifications in initializing the weights of classifier backbones and constructing model ensembles to reduce prediction variance compared to individual constituent models. Full article
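
The key modification described here, initializing the RetinaNet classifier backbone with CXR modality-specific weights rather than random or ImageNet weights, might be sketched as below. The checkpoint name and weight-loading details are assumptions, not the authors' code:

```python
# Sketch of modality-specific backbone initialization; the checkpoint name
# and loading details are assumptions, not the authors' code.
import torch
from torchvision.models.detection import retinanet_resnet50_fpn

# Hypothetical checkpoint of a classifier pretrained on large CXR collections
# (the "CXR modality-specific model" described above).
cxr_state = torch.load("cxr_modality_backbone.pth")

detector = retinanet_resnet50_fpn(num_classes=2)  # pneumonia vs. background
# Initialize the ResNet50 body from the CXR-specific weights instead of
# random or ImageNet weights; strict=False skips the FPN/head keys.
detector.backbone.body.load_state_dict(cxr_state, strict=False)
```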

16 pages, 6094 KiB  
Article
Automated 3D Segmentation of the Aorta and Pulmonary Artery on Non-Contrast-Enhanced Chest Computed Tomography Images in Lung Cancer Patients
by Hao-Jen Wang, Li-Wei Chen, Hsin-Ying Lee, Yu-Jung Chung, Yan-Ting Lin, Yi-Chieh Lee, Yi-Chang Chen, Chung-Ming Chen and Mong-Wei Lin
Diagnostics 2022, 12(4), 967; https://doi.org/10.3390/diagnostics12040967 - 12 Apr 2022
Cited by 8 | Viewed by 3381 | Correction
Abstract
Pulmonary hypertension should be preoperatively evaluated for optimal surgical planning to reduce surgical risk in lung cancer patients. Preoperative measurement of vascular diameter in computed tomography (CT) images is a noninvasive prediction method for pulmonary hypertension. However, the current estimation method, 2D manual arterial diameter measurement, may yield inaccurate results owing to low tissue contrast in non-contrast-enhanced CT (NECT). Furthermore, it provides an incomplete evaluation by measuring only the diameter of the arteries rather than the volume. To provide a more complete and accurate estimation, this study proposed a novel two-stage deep learning (DL) model for 3D aortic and pulmonary artery segmentation in NECT. In the first stage, a DL model was constructed to enhance the contrast of NECT; in the second stage, two DL models then applied the enhanced images for aorta and pulmonary artery segmentation. Overall, 179 patients were divided into contrast enhancement model (n = 59), segmentation model (n = 120), and testing (n = 20) groups. The performance of the proposed model was evaluated using Dice similarity coefficient (DSC). The proposed model could achieve 0.97 ± 0.007 and 0.93 ± 0.002 DSC for aortic and pulmonary artery segmentation, respectively. The proposed model may provide 3D diameter information of the arteries before surgery, facilitating the estimation of pulmonary hypertension and supporting preoperative surgical method selection based on the predicted surgical risks. Full article
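
Model evaluation uses the Dice similarity coefficient, DSC = 2|A∩B| / (|A| + |B|); a minimal implementation for binary 3D masks:

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary 3D segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + eps)
```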

16 pages, 859 KiB  
Article
A Rotational Invariant Neural Network for Electrical Impedance Tomography Imaging without Reference Voltage: RF-REIM-NET
by Jöran Rixen, Benedikt Eliasson, Benjamin Hentze, Thomas Muders, Christian Putensen, Steffen Leonhardt and Chuong Ngo
Diagnostics 2022, 12(4), 777; https://doi.org/10.3390/diagnostics12040777 - 22 Mar 2022
Cited by 6 | Viewed by 2342
Abstract
Background: Electrical Impedance Tomography (EIT) is a radiation-free technique for image reconstruction. However, as the inverse problem of EIT is non-linear and ill-posed, the reconstruction of sharp conductivity images poses a major problem. With the emergence of artificial neural networks (ANN), their application in EIT has recently gained interest. Methodology: We propose an ANN that can solve the inverse problem without the presence of a reference voltage. At the end of the ANN, we reused the dense layers multiple times, considering that the EIT exhibits rotational symmetries in a circular domain. To avoid bias in training data, the conductivity range used in the simulations was greater than expected in measurements. We also propose a new method that creates new data samples from existing training data. Results: We show that our ANN is more robust with respect to noise compared with the analytical Gauss–Newton approach. The reconstruction results for EIT phantom tank measurements are also clearer, as ringing artefacts are less pronounced. To evaluate the performance of the ANN under real-world conditions, we perform reconstructions on an experimental pig study with computed tomography for comparison. Conclusions: Our proposed ANN can reconstruct EIT images without the need of a reference voltage. Full article
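
The rotational symmetry the authors exploit also suggests a simple way to create new training samples: in a circular domain with equally spaced electrodes, cyclically shifting the measurement pattern corresponds to rotating the conductivity image. The sketch below illustrates this idea under assumed data shapes; it is not necessarily the paper's exact augmentation scheme:

```python
# Hedged sketch of rotation-based sample generation for EIT; electrode count,
# array shapes, and interpolation are assumptions.
import numpy as np
from scipy.ndimage import rotate

N_ELECTRODES = 16  # assumed ring of equally spaced electrodes

def rotated_sample(voltages: np.ndarray, image: np.ndarray, k: int):
    """Create a new training pair by rotating the domain k electrode steps:
    cyclically shift the (injection x measurement) voltage matrix on both
    axes and rotate the conductivity image by the matching angle."""
    v_rot = np.roll(np.roll(voltages, k, axis=0), k, axis=1)
    img_rot = rotate(image, angle=360.0 * k / N_ELECTRODES,
                     reshape=False, order=1)
    return v_rot, img_rot
```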

23 pages, 2135 KiB  
Article
Generalization Challenges in Drug-Resistant Tuberculosis Detection from Chest X-rays
by Manohar Karki, Karthik Kantipudi, Feng Yang, Hang Yu, Yi Xiang J. Wang, Ziv Yaniv and Stefan Jaeger
Diagnostics 2022, 12(1), 188; https://doi.org/10.3390/diagnostics12010188 - 13 Jan 2022
Cited by 14 | Viewed by 3959
Abstract
Classification of drug-resistant tuberculosis (DR-TB) and drug-sensitive tuberculosis (DS-TB) from chest radiographs remains an open problem. Our previous cross-validation work on publicly available chest X-ray (CXR) data, combined with image augmentation and the addition of synthetically generated and publicly available images, achieved a performance of 85% AUC with a deep convolutional neural network (CNN). However, when we evaluated the CNN model trained to classify DR-TB and DS-TB on unseen data, significant performance degradation was observed (65% AUC). Hence, in this paper, we investigate the generalizability of our models on images from a held-out country's dataset. We explore the extent of the problem and the possible reasons behind the lack of good generalization. A comparison of radiologist-annotated lesion locations in the lung and the trained model's localization of areas of interest, using GradCAM, did not show much overlap. Using the same network architecture, a multi-country classifier was able to identify the country of origin of the X-ray with high accuracy (86%), suggesting that image acquisition differences and the distribution of non-pathological and non-anatomical aspects of the images are affecting the generalization and localization of the drug resistance classification model as well. When CXR images were severely corrupted, the performance on the validation set was still better than 60% AUC: the model overfitted to the data from countries in the cross-validation set but did not generalize to the held-out country. Finally, we applied a multi-task-based approach that uses prior TB lesion location information to guide the classifier network's attention, improving the generalization performance on the held-out set from another country to 68% AUC. Full article
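
The reported comparison between GradCAM localizations and radiologist-annotated lesion regions can be quantified as an overlap score; a minimal sketch (the threshold and mask formats are assumptions):

```python
import numpy as np

def saliency_overlap(cam: np.ndarray, lesion_mask: np.ndarray,
                     threshold: float = 0.5) -> float:
    """Intersection-over-union between a thresholded GradCAM heatmap
    (normalized to [0, 1]) and a binary radiologist lesion mask."""
    cam_mask = cam >= threshold
    lesion = lesion_mask.astype(bool)
    union = np.logical_or(cam_mask, lesion).sum()
    if union == 0:
        return 0.0
    return np.logical_and(cam_mask, lesion).sum() / union
```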

13 pages, 1641 KiB  
Article
Deep Learning Supplants Visual Analysis by Experienced Operators for the Diagnosis of Cardiac Amyloidosis by Cine-CMR
by Philippe Germain, Armine Vardazaryan, Nicolas Padoy, Aissam Labani, Catherine Roy, Thomas Hellmut Schindler and Soraya El Ghannudi
Diagnostics 2022, 12(1), 69; https://doi.org/10.3390/diagnostics12010069 - 29 Dec 2021
Cited by 6 | Viewed by 2136
Abstract
Background: Diagnosing cardiac amyloidosis (CA) from cine-CMR (cardiac magnetic resonance) alone is not reliable. In this study, we tested if a convolutional neural network (CNN) could outperform the visual diagnosis of experienced operators. Method: 119 patients with cardiac amyloidosis and 122 patients with left ventricular hypertrophy (LVH) of other origins were retrospectively selected. Diastolic and systolic cine-CMR images were preprocessed and labeled. A dual-input visual geometry group (VGG) model was used for binary image classification. All images belonging to the same patient were distributed in the same set. Accuracy and area under the curve (AUC) were calculated per frame and per patient from a 40% held-out test set. Results were compared to a visual analysis assessed by three experienced operators. Results: Frame-based comparisons between humans and a CNN provided an accuracy of 0.605 vs. 0.746 (p < 0.0008) and an AUC of 0.630 vs. 0.824 (p < 0.0001). Patient-based comparisons provided an accuracy of 0.660 vs. 0.825 (p < 0.008) and an AUC of 0.727 vs. 0.895 (p < 0.002). Conclusion: Based on cine-CMR images alone, a CNN is able to discriminate cardiac amyloidosis from LVH of other origins better than experienced human operators (15 to 20 points more in absolute value for accuracy and AUC), demonstrating a unique capability to identify what the eyes cannot see through classical radiological analysis. Full article
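
Per-patient results of the kind reported here require aggregating frame-level CNN outputs; a minimal sketch using mean-probability aggregation (the aggregation rule is an assumption, not necessarily the authors'):

```python
import numpy as np
from collections import defaultdict

def patient_level_scores(frame_probs, patient_ids):
    """Average per-frame CNN probabilities into one score per patient;
    thresholding the mean (e.g., at 0.5) yields the patient-level label."""
    by_patient = defaultdict(list)
    for prob, pid in zip(frame_probs, patient_ids):
        by_patient[pid].append(prob)
    return {pid: float(np.mean(p)) for pid, p in by_patient.items()}
```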

16 pages, 7610 KiB  
Article
VGG19 Network Assisted Joint Segmentation and Classification of Lung Nodules in CT Images
by Muhammad Attique Khan, Venkatesan Rajinikanth, Suresh Chandra Satapathy, David Taniar, Jnyana Ranjan Mohanty, Usman Tariq and Robertas Damaševičius
Diagnostics 2021, 11(12), 2208; https://doi.org/10.3390/diagnostics11122208 - 26 Nov 2021
Cited by 81 | Viewed by 5321
Abstract
Pulmonary nodules are a form of lung disease, and their early diagnosis and treatment are essential to cure the patient. This paper introduces a deep learning framework to support the automated detection of lung nodules in computed tomography (CT) images. The proposed framework employs VGG-SegNet-supported nodule mining and pretrained DL-based classification to support automated lung nodule detection. The classification of lung CT images is implemented using the attained deep features, which are then serially concatenated with handcrafted features, such as the Grey Level Co-occurrence Matrix (GLCM), Local Binary Pattern (LBP), and Pyramid Histogram of Oriented Gradients (PHOG), to enhance the disease detection accuracy. The images used for the experiments were collected from the LIDC-IDRI and Lung-PET-CT-Dx datasets. The experimental results show that the VGG19 architecture with concatenated deep and handcrafted features can achieve an accuracy of 97.83% with the SVM-RBF classifier. Full article
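
The serial concatenation of deep and handcrafted features feeding an SVM-RBF classifier can be sketched as follows; the feature arrays are assumed to be precomputed placeholders:

```python
# Sketch of serial feature fusion with an SVM-RBF classifier; the feature
# arrays (deep_feats, glcm, lbp, phog) and labels y are assumed precomputed.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.concatenate([deep_feats, glcm, lbp, phog], axis=1)  # serial fusion

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)  # y: nodule class labels (placeholder)
```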

36 pages, 19087 KiB  
Article
Inter-Variability Study of COVLIAS 1.0: Hybrid Deep Learning Models for COVID-19 Lung Segmentation in Computed Tomography
by Jasjit S. Suri, Sushant Agarwal, Pranav Elavarthi, Rajesh Pathak, Vedmanvitha Ketireddy, Marta Columbu, Luca Saba, Suneet K. Gupta, Gavino Faa, Inder M. Singh, Monika Turk, Paramjit S. Chadha, Amer M. Johri, Narendra N. Khanna, Klaudija Viskovic, Sophie Mavrogeni, John R. Laird, Gyan Pareek, Martin Miner, David W. Sobel, Antonella Balestrieri, Petros P. Sfikakis, George Tsoulfas, Athanasios Protogerou, Durga Prasanna Misra, Vikas Agarwal, George D. Kitas, Jagjit S. Teji, Mustafa Al-Maini, Surinder K. Dhanjil, Andrew Nicolaides, Aditya Sharma, Vijay Rathore, Mostafa Fatemi, Azra Alizad, Pudukode R. Krishnan, Ferenc Nagy, Zoltan Ruzsa, Archna Gupta, Subbaram Naidu and Mannudeep K. Kalra
Diagnostics 2021, 11(11), 2025; https://doi.org/10.3390/diagnostics11112025 - 1 Nov 2021
Cited by 20 | Viewed by 3714
Abstract
Background: For COVID-19 lung severity assessment, segmentation of the lungs on computed tomography (CT) is the first crucial step. Current deep learning (DL)-based Artificial Intelligence (AI) models have a bias in the training stage of segmentation because only one set of ground truth (GT) annotations is evaluated. We propose a robust and stable inter-variability analysis of CT lung segmentation in COVID-19 to avoid the effect of bias. Methodology: The proposed inter-variability study consists of two GT tracers for lung segmentation on chest CT. Three AI models, PSP Net, VGG-SegNet, and ResNet-SegNet, were trained using the GT annotations. We hypothesized that if AI models are trained on GT tracings from multiple experience levels, and if the AI performance on the test data between these AI models is within a 5% range, one can consider such an AI model robust and unbiased. The K5 protocol (training to testing: 80%:20%) was adopted. Ten kinds of metrics were used for performance evaluation. Results: The database consisted of 5000 CT chest images from 72 COVID-19-infected patients. By computing the coefficient of correlation (CC) between the outputs of the two AI models trained on the two GT tracers, computing the differences in their CC, and repeating the process for all three AI models, we show the differences to be 0%, 0.51%, and 2.04% (all < 5%), thereby validating the hypothesis. The performance was comparable; however, it had the following order: ResNet-SegNet > PSP Net > VGG-SegNet. Conclusions: The AI models were clinically robust and stable during the inter-variability analysis of CT lung segmentation in COVID-19 patients. Full article
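
The inter-variability test reduces to comparing correlation coefficients between paired model outputs; a sketch of the check for two of the architectures (the prediction arrays are placeholders):

```python
import numpy as np

def cc(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two flattened segmentation outputs."""
    return float(np.corrcoef(np.ravel(a), np.ravel(b))[0, 1])

# For each architecture, correlate the outputs of its two variants (trained
# on tracer-1 vs. tracer-2 ground truth) over the same test CTs, then check
# that the difference between architectures stays within the 5% band.
cc_psp = cc(psp_gt1_preds, psp_gt2_preds)  # placeholder prediction arrays
cc_vgg = cc(vgg_gt1_preds, vgg_gt2_preds)
assert abs(cc_psp - cc_vgg) * 100 < 5, "within 5%: robust and unbiased"
```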

11 pages, 1585 KiB  
Article
Diagnostic Performance of Dual-Energy Subtraction Radiography for the Detection of Pulmonary Emphysema: An Intra-Individual Comparison
by Julia A. Mueller, Katharina Martini, Matthias Eberhard, Mathias A. Mueller, Alessandra A. De Silvestro, Philipp Breiding and Thomas Frauenfelder
Diagnostics 2021, 11(10), 1849; https://doi.org/10.3390/diagnostics11101849 - 7 Oct 2021
Cited by 2 | Viewed by 2287
Abstract
Purpose/Objectives: To compare the diagnostic performance of dual-energy subtraction (DE) and conventional radiography (CR) for detecting pulmonary emphysema using computed tomography (CT) as a reference standard. Methods and Materials: Sixty-six patients (24 female, median age 73) were retrospectively included after obtaining lateral and posteroanterior chest X-rays with a dual-shot DE technique and chest CT within ±3 months. Two experienced radiologists first evaluated the standard CR images and, second, the bone-/soft-tissue-weighted DE images for the presence (yes/no), degree (1–4), and quadrant-based distribution of emphysema. CT was used as a reference standard. Inter-reader agreement was calculated. Sensitivity and specificity for the correct detection and localization of emphysema were calculated. Further, the degree of emphysema on CR and DE was correlated with results from CT. A p-value < 0.05 was considered statistically significant. Results: The mean inter-reader agreement was substantial for CR and moderate for DE (kCR = 0.611 vs. kDE = 0.433). Sensitivity and specificity for the detection of emphysema were comparable between CR and DE (sensitivityCR 96% and specificityCR 75% vs. sensitivityDE 91% and specificityDE 83%; p = 0.157). Similarly, there was no significant difference in the sensitivity or specificity for emphysema localization between CR and DE (sensitivityCR 50% and specificityCR 100% vs. sensitivityDE 57% and specificityDE 100%; p = 0.157). There was a slightly better correlation with CT for emphysema grading in DE compared to CR (rDE = 0.75 vs. rCR = 0.68; p = 0.108); these differences were not statistically significant, however. Conclusion: Diagnostic accuracy for the detection, quantification, and localization of emphysema is comparable between CR and DE. Inter-reader agreement, however, is better with CR than with DE. Full article
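
Inter-reader agreement values such as kCR and kDE are typically Cohen's kappa; a minimal example with scikit-learn (the rating vectors are placeholders):

```python
from sklearn.metrics import cohen_kappa_score

# Emphysema present (1) / absent (0) calls by two readers on the same images;
# placeholder ratings for illustration.
reader1 = [1, 1, 0, 1, 0, 1, 1, 0]
reader2 = [1, 0, 0, 1, 0, 1, 1, 1]
kappa = cohen_kappa_score(reader1, reader2)
print(f"Cohen's kappa = {kappa:.3f}")  # 0.467 here, "moderate" agreement
```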

Other


17 pages, 1455 KiB  
Systematic Review
The Added Effect of Artificial Intelligence on Physicians’ Performance in Detecting Thoracic Pathologies on CT and Chest X-ray: A Systematic Review
by Dana Li, Lea Marie Pehrson, Carsten Ammitzbøl Lauridsen, Lea Tøttrup, Marco Fraccaro, Desmond Elliott, Hubert Dariusz Zając, Sune Darkner, Jonathan Frederik Carlsen and Michael Bachmann Nielsen
Diagnostics 2021, 11(12), 2206; https://doi.org/10.3390/diagnostics11122206 - 26 Nov 2021
Cited by 22 | Viewed by 4191
Abstract
Our systematic review investigated the additional effect of artificial intelligence-based devices on human observers when diagnosing and/or detecting thoracic pathologies using different diagnostic imaging modalities, such as chest X-ray and CT. Peer-reviewed, original research articles from EMBASE, PubMed, Cochrane library, SCOPUS, and Web of Science were retrieved. Included articles were published within the last 20 years and used a device based on artificial intelligence (AI) technology to detect or diagnose pulmonary findings. The AI-based device had to be used in an observer test where the performance of human observers with and without addition of the device was measured as sensitivity, specificity, accuracy, AUC, or time spent on image reading. A total of 38 studies were included for final assessment. The quality assessment tool for diagnostic accuracy studies (QUADAS-2) was used for bias assessment. The average sensitivity increased from 67.8% to 74.6%; specificity from 82.2% to 85.4%; accuracy from 75.4% to 81.7%; and Area Under the ROC Curve (AUC) from 0.75 to 0.80. Generally, a faster reading time was reported when radiologists were aided by AI-based devices. Our systematic review showed that performance generally improved for the physicians when assisted by AI-based devices compared to unaided interpretation. Full article
