Artificial Intelligence (AI) for Medical Image Processing

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: closed (20 February 2024) | Viewed by 20052

Special Issue Editors

Guest Editor
Radiology Department, The Ohio State University, Columbus, OH, USA
Interests: pediatric neuroradiology; radiology; neuroimaging; advanced imaging and precision health

Guest Editor
Department of Digital Medicine, School of Biomedical Engineering and Imaging Medicine, Army Medical University, Chongqing, China
Interests: machine learning; medical image processing; application of artificial intelligence in clinical diagnosis

Special Issue Information

Dear Colleagues,

In recent years, Artificial Intelligence (AI) technology has been widely used in medical image processing to screen for clinical lesions. In general, the classification, segmentation, and localization of clinical lesions in medical images are the major tasks for deep learning. In addition, medical image registration based on deep learning is an important research direction.

This Special Issue, entitled “Artificial Intelligence (AI) for Medical Image Processing”, aims to highlight the most recent advances in the field of deep learning for medical image processing. We invite authors to submit original research articles as well as review articles that focus on (but are not limited to) the following topics:

  • Advanced deep learning methods for medical image classification, segmentation, and registration;
  • Lesion identification and localization in medical images;
  • Deep learning of multimodal medical data (including medical images);
  • The interpretability of deep learning for medical images;
  • Deep learning with imbalanced samples in medical images;
  • Efficient annotation of medical images for deep learning;
  • Small-sample learning for medical images;
  • Quality control stress tests for deep learning on medical images.

Dr. Mai-Lan Ho
Dr. Yongjian Nian
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • deep learning
  • medical image classification
  • medical image segmentation
  • medical image registration
  • lesion identification and localization
  • quality control

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (11 papers)


Research

15 pages, 1258 KiB  
Article
Detection of Cervical Lesion Cell/Clumps Based on Adaptive Feature Extraction
by Gang Li, Xingguang Li, Yuting Wang, Shu Gong, Yanting Yang and Chuanyun Xu
Bioengineering 2024, 11(7), 686; https://doi.org/10.3390/bioengineering11070686 - 5 Jul 2024
Viewed by 929
Abstract
Automated detection of cervical lesion cells/clumps in cervical cytological images is essential for computer-aided diagnosis. In this task, the shape and size of lesion cells/clumps vary considerably, which reduces detection performance. To address this issue, we propose an adaptive feature extraction network for cervical lesion cell/clump detection, called AFE-Net. Specifically, we propose an adaptive module to acquire the features of cervical lesion cells/clumps and introduce a global bias mechanism to capture global average information, combining the adaptive features with global information to improve the representation of target features and thus enhance the detection performance of the model. Furthermore, we analyze how popular bounding box losses behave on this model and propose a new bounding box loss, tendency-IoU (TIoU). The network achieves a mean Average Precision (mAP) of 64.8% on the CDetector dataset with 30.7 million parameters. Compared with YOLOv7 (62.6% mAP, 34.8M parameters), the model improves mAP by 2.2% and reduces the number of parameters by 11.8%.
(This article belongs to the Special Issue Artificial Intelligence (AI) for Medical Image Processing)
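The abstract does not give the TIoU formulation itself. As a point of reference, the plain IoU loss that such bounding box losses build on can be sketched as follows; this is a minimal PyTorch sketch, not the paper's TIoU, and all names are illustrative.

```python
import torch

def iou_loss(pred, target, eps=1e-7):
    """Plain IoU loss for axis-aligned boxes in (x1, y1, x2, y2) format.
    Baseline only; the paper's tendency-IoU (TIoU) modifies this idea."""
    # Intersection rectangle
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)

    # Union = sum of the box areas minus the intersection
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter

    return (1.0 - inter / (union + eps)).mean()
```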

15 pages, 5568 KiB  
Article
Feature Extraction Based on Local Histogram with Unequal Bins and a Recurrent Neural Network for the Diagnosis of Kidney Diseases from CT Images
by Abdorreza Alavi Gharahbagh, Vahid Hajihashemi, José J. M. Machado and João Manuel R. S. Tavares
Bioengineering 2024, 11(3), 220; https://doi.org/10.3390/bioengineering11030220 - 25 Feb 2024
Cited by 1 | Viewed by 1419
Abstract
Kidney disease remains one of the most common ailments worldwide, with cancer being one of its most common forms. Early diagnosis can significantly improve a patient's prognosis. The development of an artificial intelligence-based system to assist in kidney cancer diagnosis is crucial because kidney illness is a global health concern and few nephrologists are qualified to evaluate kidney cancer. Diagnosing and categorising the different forms of renal failure presents the biggest treatment hurdle for kidney cancer. Thus, this article presents a novel method for detecting and classifying kidney cancer subgroups in Computed Tomography (CT) images based on an asymmetric local statistical pixel distribution. In the first step, the input image is divided into non-overlapping windows, and a statistical distribution of its pixels is built for each cancer type. The method then builds the asymmetric statistical distribution of the image’s gradient pixels. Finally, the cancer type is identified by applying the two statistical distributions to a Deep Neural Network (DNN). The proposed method was evaluated using a dataset collected and authorised by the Dhaka Central International Medical Hospital in Bangladesh, which includes 12,446 CT images of the whole abdomen and urogram, acquired with and without contrast. The results confirm that the proposed method outperformed state-of-the-art methods in terms of the usual correctness criteria. The accuracy of the proposed method for all kidney cancer subtypes presented in the dataset was 99.89%, which is promising.
(This article belongs to the Special Issue Artificial Intelligence (AI) for Medical Image Processing)
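To illustrate the kind of feature the method describes, here is a minimal sketch of per-window histograms with unequal bins; the window size and bin edges below are assumptions, as the abstract does not specify the paper's actual (asymmetric) choices.

```python
import numpy as np

def window_histograms(image, win=32, bin_edges=(0, 16, 48, 112, 176, 224, 256)):
    """Split a grayscale image into non-overlapping windows and compute a
    histogram with unequal bins for each window. Edges are illustrative."""
    h, w = image.shape
    feats = []
    for r in range(0, h - win + 1, win):
        for c in range(0, w - win + 1, win):
            patch = image[r:r + win, c:c + win]
            hist, _ = np.histogram(patch, bins=bin_edges)
            feats.append(hist / patch.size)   # normalize to frequencies
    return np.asarray(feats)

# Example on a synthetic 128x128 "CT slice"
hists = window_histograms(np.random.randint(0, 256, (128, 128)))
print(hists.shape)  # (16, 6): 16 windows, 6 unequal bins each
```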

24 pages, 13599 KiB  
Article
A Critical Assessment of Generative Models for Synthetic Data Augmentation on Limited Pneumonia X-ray Data
by Daniel Schaudt, Christian Späte, Reinhold von Schwerin, Manfred Reichert, Marianne von Schwerin, Meinrad Beer and Christopher Kloth
Bioengineering 2023, 10(12), 1421; https://doi.org/10.3390/bioengineering10121421 - 14 Dec 2023
Cited by 1 | Viewed by 2371
Abstract
In medical imaging, deep learning models serve as invaluable tools for expediting diagnoses and aiding specialized medical professionals in making clinical decisions. However, effectively training deep learning models typically necessitates substantial quantities of high-quality data, a resource often lacking in numerous medical imaging scenarios. One way to overcome this deficiency is to artificially generate such images. Therefore, in this comparative study, we train five generative models to artificially increase the amount of available data in such a scenario. This synthetic data approach is evaluated on a downstream classification task, predicting four causes of pneumonia as well as healthy cases on 1082 chest X-ray images. Quantitative and medical assessments show that a Generative Adversarial Network (GAN)-based approach significantly outperforms more recent diffusion-based approaches on this limited dataset, with better image quality and pathological plausibility. We show that better image quality surprisingly does not translate to improved classification performance by evaluating five different classification models and varying the amount of additional training data. Class-specific metrics like precision, recall, and F1-score show a substantial improvement from using synthetic images, emphasizing the data rebalancing effect for less frequent classes. However, overall performance does not improve for most models and configurations, except for a DreamBooth approach, which shows a +0.52 improvement in overall accuracy. The large variance of performance impact in this study suggests that generative models should be employed with care in limited data scenarios, especially given the unexpected negative correlation between image quality and downstream classification improvement.
(This article belongs to the Special Issue Artificial Intelligence (AI) for Medical Image Processing)
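A minimal sketch of the rebalancing step the abstract alludes to, topping up rare classes with synthetic samples; the function and data layout are hypothetical, and the generative models that produce the synthetic pool (GAN, diffusion, DreamBooth) are out of scope here.

```python
import random
from collections import Counter

def rebalance_with_synthetic(real, synthetic, per_class=None):
    """Top up under-represented classes with synthetic samples.

    `real` and `synthetic` are lists of (image_path, label) pairs; all
    names and the fill-to-max policy are illustrative assumptions."""
    counts = Counter(label for _, label in real)
    target = per_class or max(counts.values())

    pool = {}
    for item in synthetic:
        pool.setdefault(item[1], []).append(item)

    out = list(real)
    for label, n in counts.items():
        extra = pool.get(label, [])
        random.shuffle(extra)
        out.extend(extra[: max(0, target - n)])  # fill up to the target count
    return out
```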

18 pages, 3655 KiB  
Article
Enhancing the Super-Resolution of Medical Images: Introducing the Deep Residual Feature Distillation Channel Attention Network for Optimized Performance and Efficiency
by Sabina Umirzakova, Sevara Mardieva, Shakhnoza Muksimova, Shabir Ahmad and Taegkeun Whangbo
Bioengineering 2023, 10(11), 1332; https://doi.org/10.3390/bioengineering10111332 - 19 Nov 2023
Cited by 15 | Viewed by 2737
Abstract
In the advancement of medical image super-resolution (SR), the Deep Residual Feature Distillation Channel Attention Network (DRFDCAN) marks a significant step forward. This work presents DRFDCAN, a model that improves on traditional SR approaches by introducing a channel attention block tailored for high-frequency features, which are crucial for the nuanced details in medical diagnostics, while streamlining the network structure for greater computational efficiency. DRFDCAN’s architecture adopts a residual-within-residual design to facilitate faster inference and reduce memory demands without compromising the integrity of the image reconstruction. This design strategy, combined with an innovative feature extraction method that emphasizes the utility of the initial layer features, improves image clarity and is particularly effective in optimizing the peak signal-to-noise ratio (PSNR). The proposed work redefines efficiency in SR models, outperforming established frameworks such as RFDN by improving model compactness and accelerating inference. The careful crafting of a feature extractor that effectively captures edge and texture information exemplifies the model’s capacity to render the detailed images necessary for accurate medical analysis. The implications of this study are two-fold: it presents a viable solution for deploying SR technology in real-time medical applications, and it sets a precedent for future models that address the delicate balance between computational efficiency and high-fidelity image reconstruction. This balance is paramount in medical applications, where image clarity can significantly influence diagnostic outcomes. The DRFDCAN model thus stands as a transformative contribution to the field of medical image super-resolution.
(This article belongs to the Special Issue Artificial Intelligence (AI) for Medical Image Processing)
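For orientation, a generic squeeze-and-excitation style channel attention block is sketched below in PyTorch; DRFDCAN's actual block is tailored to high-frequency features and differs in detail, and the sizes here are illustrative.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Generic channel attention: global pooling produces a per-channel
    gate that rescales the feature map. Only the pattern DRFDCAN builds on."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # global average "squeeze"
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                          # per-channel gate in [0, 1]
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))           # rescale each channel

x = torch.randn(1, 64, 32, 32)
print(ChannelAttention(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```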

18 pages, 5112 KiB  
Article
Detection of Image Artifacts Using Improved Cascade Region-Based CNN for Quality Assessment of Endoscopic Images
by Wei Sun, Peng Li, Yan Liang, Yadong Feng and Lingxiao Zhao
Bioengineering 2023, 10(11), 1288; https://doi.org/10.3390/bioengineering10111288 - 6 Nov 2023
Viewed by 1868
Abstract
Endoscopy is a commonly used clinical method for examining gastrointestinal disorders. However, the complexity of the gastrointestinal environment can lead to artifacts that affect the visual perception of images captured during endoscopic examinations. Existing no-reference image quality assessment methods have limitations: some are artifact-specific, while others are poorly interpretable. This study presents an improved cascade region-based convolutional neural network (CNN) for detecting gastrointestinal artifacts in order to quantitatively assess the quality of endoscopic images. The method detects eight artifact types in endoscopic images and provides their localization, classification, and confidence scores; these scores constitute the image quality assessment results. The artifact detection component enhances the feature pyramid structure, incorporates a channel attention mechanism into feature extraction, and combines shallow and deep features to improve the use of spatial information. The detection results are then used for image quality assessment. Experimental results using white light imaging, narrow-band imaging, and iodine-stained images demonstrate that the proposed artifact detection method achieved the highest average precision (62.4% at a 50% IoU threshold), improving on typical networks. Furthermore, three clinicians validated that the proposed image quality assessment method, based on object detection of endoscopy artifacts, achieves a correlation coefficient of 60.71%.
(This article belongs to the Special Issue Artificial Intelligence (AI) for Medical Image Processing)
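One plausible way to turn artifact detections into a scalar quality score is sketched below; the artifact class names, severity weights, and scoring function are all hypothetical, since the abstract does not specify the paper's exact mapping from detections to scores.

```python
# Illustrative only: hypothetical severity weights per artifact class.
ARTIFACT_WEIGHTS = {
    "blur": 0.9, "bubbles": 0.4, "specularity": 0.5, "saturation": 0.7,
    "contrast": 0.6, "instrument": 0.3, "blood": 0.5, "misc": 0.2,
}

def quality_score(detections):
    """detections: list of (artifact_class, confidence, box_area_fraction).
    Penalize each detection by severity, confidence, and covered area."""
    penalty = sum(
        ARTIFACT_WEIGHTS.get(cls, 0.2) * conf * area
        for cls, conf, area in detections
    )
    return max(0.0, 1.0 - penalty)  # 1.0 = clean image, 0.0 = unusable

print(quality_score([("blur", 0.8, 0.3), ("bubbles", 0.6, 0.1)]))  # 0.76
```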

18 pages, 3195 KiB  
Article
CLRD: Collaborative Learning for Retinopathy Detection Using Fundus Images
by Yuan Gao, Chenbin Ma, Lishuang Guo, Xuxiang Zhang and Xunming Ji
Bioengineering 2023, 10(8), 978; https://doi.org/10.3390/bioengineering10080978 - 18 Aug 2023
Cited by 1 | Viewed by 1263
Abstract
Retinopathy, a prevalent disease causing visual impairment and sometimes blindness, affects many individuals in the population. Early detection and treatment of the disease can be facilitated by monitoring the retina using fundus imaging. Nonetheless, the limited availability of fundus images and the imbalanced datasets warrant the development of more precise and efficient algorithms to enhance diagnostic performance. This study presents a novel online knowledge distillation framework, called CLRD, which employs a collaborative learning approach for detecting retinopathy. By combining student models with varying scales and architectures, the CLRD framework extracts crucial pathological information from fundus images. The transfer of knowledge is accomplished by developing distortion information particular to fundus images, thereby enhancing model invariance. Our selected student models are the Transformer-based BEiT and the CNN-based ConvNeXt, which achieve accuracies of 98.77% and 96.88%, respectively. Furthermore, the proposed method achieves 5.69–23.13%, 5.37–23.73%, 5.74–23.17%, 11.24–45.21%, and 5.87–24.96% higher accuracy, precision, recall, specificity, and F1 score, respectively, compared to advanced visual models. The results of our study indicate that the CLRD framework can effectively minimize generalization error without compromising the independent predictions made by the student models, offering novel directions for further investigation into detecting retinopathy.
(This article belongs to the Special Issue Artificial Intelligence (AI) for Medical Image Processing)
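The collaborative (online) distillation idea can be sketched as a mutual KL term between the two students' softened predictions; the temperature and weighting below are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def mutual_distillation_loss(logits_a, logits_b, labels, T=4.0, alpha=0.5):
    """Online distillation between two peer students (e.g. a Transformer
    and a CNN). Each learns from the labels and from the other's softened
    predictions; T and alpha are illustrative hyperparameters."""
    ce = F.cross_entropy(logits_a, labels) + F.cross_entropy(logits_b, labels)
    kl_ab = F.kl_div(F.log_softmax(logits_a / T, dim=1),
                     F.softmax(logits_b.detach() / T, dim=1),
                     reduction="batchmean") * T * T
    kl_ba = F.kl_div(F.log_softmax(logits_b / T, dim=1),
                     F.softmax(logits_a.detach() / T, dim=1),
                     reduction="batchmean") * T * T
    return (1 - alpha) * ce + alpha * (kl_ab + kl_ba)
```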

26 pages, 5830 KiB  
Article
Co-ERA-Net: Co-Supervision and Enhanced Region Attention for Accurate Segmentation in COVID-19 Chest Infection Images
by Zebang He, Alex Ngai Nick Wong and Jung Sun Yoo
Bioengineering 2023, 10(8), 928; https://doi.org/10.3390/bioengineering10080928 - 4 Aug 2023
Cited by 1 | Viewed by 1439
Abstract
Accurate segmentation of infected lesions in chest images remains a challenging task due to the lack of utilization of lung region information, which could serve as a strong location hint for infection. In this paper, we propose Co-ERA-Net, a novel segmentation network for infections in chest images that leverages lung region information by enhancing the supervised information and fusing multi-scale lung region and infection information at different levels. To achieve this, we introduce a co-supervision scheme incorporating lung region information to guide the network to accurately locate infections within the lung region. Furthermore, we design an Enhanced Region Attention Module (ERAM) to highlight regions with a high probability of infection by incorporating infection information into the lung region information. The effectiveness of the proposed scheme is demonstrated using COVID-19 CT and X-ray datasets, with the results showing that the proposed schemes and modules are promising. Relative to the baseline, the co-supervision scheme with lung region information improves the Dice coefficient by 7.41% and 2.22% and the IoU by 8.20% and 3.00% on the CT and X-ray datasets, respectively. When this scheme is combined with the Enhanced Region Attention Module, the Dice coefficient improves further by 14.24% and 2.97%, and the IoU by 28.64% and 4.49%, on the same datasets. In comparison with existing approaches across various datasets, our proposed method achieves better segmentation performance on all main metrics and exhibits the best generalization and comprehensive performance.
(This article belongs to the Special Issue Artificial Intelligence (AI) for Medical Image Processing)
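For reference, the Dice coefficient and IoU quoted above are computed as follows; this is a minimal worked example on toy binary masks.

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """Dice coefficient and IoU for binary masks (numpy arrays of 0/1)."""
    inter = np.logical_and(pred, target).sum()
    dice = 2 * inter / (pred.sum() + target.sum() + eps)
    iou = inter / (np.logical_or(pred, target).sum() + eps)
    return dice, iou

# Two overlapping 4x4 squares on a 10x10 grid, overlap = 4 pixels
a = np.zeros((10, 10), int); a[2:6, 2:6] = 1
b = np.zeros((10, 10), int); b[4:8, 4:8] = 1
print(dice_and_iou(a, b))  # Dice = 8/32 = 0.25, IoU = 4/28 ≈ 0.143
```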

16 pages, 1975 KiB  
Article
CSF-Glioma: A Causal Segmentation Framework for Accurate Grading and Subregion Identification of Gliomas
by Yao Zheng, Dong Huang, Yuefei Feng, Xiaoshuo Hao, Yutao He and Yang Liu
Bioengineering 2023, 10(8), 887; https://doi.org/10.3390/bioengineering10080887 - 26 Jul 2023
Cited by 1 | Viewed by 1116
Abstract
Deep networks have shown strong performance in glioma grading; however, interpreting their decisions remains challenging due to glioma heterogeneity. To address these challenges, we propose the Causal Segmentation Framework (CSF), which aims to accurately predict high- and low-grade gliomas while simultaneously highlighting key subregions. Our framework utilizes a shrinkage segmentation method to identify subregions containing the essential decision information. Moreover, we introduce a glioma grading module that combines deep learning and traditional approaches for precise grading. Our proposed model achieves the best performance among all compared models, with an AUC of 96.14%, an F1 score of 93.74%, an accuracy of 91.04%, a sensitivity of 91.83%, and a specificity of 88.88%. Additionally, the model exhibits efficient resource utilization, completing predictions within 2.31 s and occupying only 0.12 GB of memory during the test phase. Furthermore, our approach provides clear and specific visualizations of key subregions, surpassing other methods in terms of interpretability. In conclusion, the CSF demonstrates its effectiveness at accurately predicting glioma grades and identifying key subregions. The inclusion of causality in the CSF model enhances the reliability and accuracy of preoperative decision-making for gliomas, and its interpretable results can assist clinicians in assessment and treatment planning.
(This article belongs to the Special Issue Artificial Intelligence (AI) for Medical Image Processing)
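As a quick reference for the metrics quoted above, their standard definitions from a binary confusion matrix (treating high-grade as the positive class) are sketched below; the counts are illustrative, not the paper's.

```python
def grading_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                   # recall on positive cases
    specificity = tn / (tn + fp)                   # recall on negative cases
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return dict(sens=sensitivity, spec=specificity, acc=accuracy, f1=f1)

# Illustrative counts only
print(grading_metrics(tp=30, fp=6, tn=50, fn=4))
```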

18 pages, 3238 KiB  
Article
An Innovative Three-Stage Model for Prenatal Genetic Disorder Detection Based on Region-of-Interest in Fetal Ultrasound
by Jiajie Tang, Jin Han, Yuxuan Jiang, Jiaxin Xue, Hang Zhou, Lianting Hu, Caiyuan Chen and Long Lu
Bioengineering 2023, 10(7), 873; https://doi.org/10.3390/bioengineering10070873 - 23 Jul 2023
Viewed by 1965
Abstract
A global survey has revealed that genetic syndromes affect approximately 8% of the population, but most genetic diagnoses are typically made after birth. Facial deformities are commonly associated with chromosomal disorders. Prenatal diagnosis through ultrasound imaging is vital for identifying abnormal fetal facial features. However, this approach faces challenges such as inconsistent diagnostic criteria and limited coverage. To address this gap, we have developed FGDS, a three-stage model that utilizes fetal ultrasound images to detect genetic disorders. Our model was trained on a dataset of 2554 images. Specifically, FGDS employs object detection technology to extract key regions and integrates disease information from each region through ensemble learning. Experimental results demonstrate that FGDS accurately recognizes the anatomical structure of the fetal face, achieving an average precision of 0.988 across all classes. In the internal test set, FGDS achieves a sensitivity of 0.753 and a specificity of 0.889. Moreover, in the external test set, FGDS outperforms mainstream deep learning models with a sensitivity of 0.768 and a specificity of 0.837. This study highlights the potential of our proposed three-stage ensemble learning model for screening fetal genetic disorders. It showcases the model’s ability to enhance detection rates in clinical practice and alleviate the burden on medical professionals.
(This article belongs to the Special Issue Artificial Intelligence (AI) for Medical Image Processing)
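The ensemble step, combining per-region disorder probabilities into one prediction, might look like the following; a weighted average is only one simple scheme, and the region names and weights here are hypothetical.

```python
import numpy as np

def ensemble_regions(region_probs, weights=None):
    """Combine per-region disorder probabilities into one score.

    `region_probs` maps each detected facial region to its classifier's
    probability of a genetic disorder; a weighted average is one simple
    ensemble, not necessarily the paper's exact scheme."""
    regions = list(region_probs)
    p = np.array([region_probs[r] for r in regions])
    w = (np.array([weights.get(r, 1.0) for r in regions])
         if weights else np.ones_like(p))
    return float((w * p).sum() / w.sum())

score = ensemble_regions(
    {"forehead": 0.7, "nose": 0.4, "jaw": 0.6},
    weights={"nose": 2.0},  # hypothetical region weighting
)
print(round(score, 3))  # 0.525
```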

16 pages, 2866 KiB  
Article
Bio-Inspired Network for Diagnosing Liver Steatosis in Ultrasound Images
by Yuan Yao, Zhenguang Zhang, Bo Peng and Jin Tang
Bioengineering 2023, 10(7), 768; https://doi.org/10.3390/bioengineering10070768 - 26 Jun 2023
Cited by 1 | Viewed by 1500
Abstract
Using ultrasound imaging to diagnose liver steatosis is of great significance for preventing diseases such as cirrhosis and liver cancer. Accurate diagnosis under conditions of low quality, noise, and poor resolution is still a challenging task. Physiological studies have shown that the visual cortex of the biological visual system has selective attention neural mechanisms and feedback regulation from higher to lower features. When processing visual information, these cortical regions selectively focus on more sensitive information and ignore unimportant details, which effectively extracts the important features from visual information. Inspired by this, we propose a new diagnostic network for hepatic steatosis. To simulate the selection mechanism and feedback regulation of the visual cortex in the ventral pathway, the network consists of a receptive field feature extraction module, a parallel attention module, and feedback connections. The receptive field feature extraction module corresponds to the inhibition of the classical receptive field of V1 neurons by the non-classical receptive field; it processes the input image to suppress unimportant background texture. Two types of attention are adopted in the parallel attention module to process the same visual information and extract different important features for fusion, which improves the overall performance of the model. In addition, we construct a new dataset of fatty liver ultrasound images and validate the proposed model on it. The experimental results show that the network performs well in terms of sensitivity, specificity, and accuracy for the diagnosis of fatty liver disease.
(This article belongs to the Special Issue Artificial Intelligence (AI) for Medical Image Processing)
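A generic version of the parallel attention idea, channel and spatial attention applied to the same features and then fused, is sketched below; the paper's biologically motivated module differs in detail, and all sizes here are illustrative.

```python
import torch
import torch.nn as nn

class ParallelAttention(nn.Module):
    """Two attention branches process the same features in parallel and
    their outputs are fused. Only the generic pattern, not the paper's design."""
    def __init__(self, channels):
        super().__init__()
        self.channel = nn.Sequential(           # per-channel gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(           # per-pixel gate
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        branches = [x * self.channel(x), x * self.spatial(x)]
        return self.fuse(torch.cat(branches, dim=1))

print(ParallelAttention(32)(torch.randn(1, 32, 16, 16)).shape)
```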

12 pages, 1275 KiB  
Article
DeepCOVID-Fuse: A Multi-Modality Deep Learning Model Fusing Chest X-rays and Clinical Variables to Predict COVID-19 Risk Levels
by Yunan Wu, Amil Dravid, Ramsey Michael Wehbe and Aggelos K. Katsaggelos
Bioengineering 2023, 10(5), 556; https://doi.org/10.3390/bioengineering10050556 - 5 May 2023
Cited by 3 | Viewed by 1952
Abstract
The COVID-19 pandemic has posed unprecedented challenges to global healthcare systems, highlighting the need for accurate and timely risk prediction models that can prioritize patient care and allocate resources effectively. This study presents DeepCOVID-Fuse, a deep learning fusion model that predicts risk levels in patients with confirmed COVID-19 by combining chest radiographs (CXRs) and clinical variables. The study collected initial CXRs, clinical variables, and outcomes (i.e., mortality, intubation, hospital length of stay, and intensive care unit (ICU) admission) from February to April 2020, with risk levels determined by the outcomes. The fusion model was trained on 1657 patients (age: 58.30 ± 17.74 years; 807 female) and validated on 428 patients (56.41 ± 17.03; 190 female) from the local healthcare system, and it was tested on 439 patients (56.51 ± 17.78; 205 female) from a different holdout hospital. The performance of the well-trained fusion models on full or partial modalities was compared using DeLong and McNemar tests. The results show that DeepCOVID-Fuse significantly (p < 0.05) outperformed models trained only on CXRs or only on clinical variables, with an accuracy of 0.658 and an area under the receiver operating characteristic curve (AUC) of 0.842. The fusion model achieves good outcome predictions even when only one of the modalities is used in testing, demonstrating its ability to learn better feature representations across modalities during training.
(This article belongs to the Special Issue Artificial Intelligence (AI) for Medical Image Processing)
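A minimal sketch of late fusion in this spirit, concatenating an image embedding with encoded clinical variables before a shared classification head; all sizes and layer choices below are assumptions, not DeepCOVID-Fuse's actual architecture, and the image backbone producing the embedding is omitted.

```python
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    """Concatenate a chest X-ray embedding with encoded clinical variables
    and classify risk level. Dimensions are illustrative assumptions."""
    def __init__(self, img_dim=512, clin_dim=12, n_classes=3):
        super().__init__()
        self.clin = nn.Sequential(nn.Linear(clin_dim, 32), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(img_dim + 32, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, img_feat, clin_vars):
        return self.head(torch.cat([img_feat, self.clin(clin_vars)], dim=1))

model = LateFusion()
logits = model(torch.randn(4, 512), torch.randn(4, 12))
print(logits.shape)  # torch.Size([4, 3])
```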
