Editorial

Special Issue: Artificial Intelligence Technology in Medical Image Analysis

by László Szilágyi 1,2,* and Levente Kovács 1

1 Physiological Controls Research Center, Óbuda University, Bécsi út 96/B, 1034 Budapest, Hungary
2 Computational Intelligence Research Group, Sapientia Hungarian University of Transylvania, 540485 Tîrgu Mureș, Romania
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(5), 2180; https://doi.org/10.3390/app14052180
Submission received: 27 February 2024 / Accepted: 28 February 2024 / Published: 5 March 2024
(This article belongs to the Special Issue AI Technology in Medical Image Analysis)

1. Introduction

Artificial intelligence (AI) technologies have significantly advanced the field of medical imaging, revolutionizing diagnostic and therapeutic processes [1]. One notable application is in image interpretation, where machine learning algorithms have demonstrated exceptional capabilities in detecting and diagnosing various medical conditions from imaging data. AI-powered tools have proven particularly effective in the field of radiology, assisting healthcare professionals in identifying abnormalities in X-rays, MRIs (magnetic resonance imaging), and CT (computed tomography) scans with increased speed and accuracy [2,3,4,5]. These technologies can analyze vast amounts of medical imaging data much faster than their human counterparts, aiding in early detection and timely intervention.
Moreover, AI plays a crucial role in improving image quality and reducing noise in medical imaging. Image enhancement techniques driven by AI algorithms help produce clearer and more detailed images, leading to improved diagnostic precision [6]. These advancements contribute to better visualizations of anatomical structures and abnormalities, allowing healthcare professionals to make more informed decisions [7]. AI also facilitates the integration of multiple imaging modalities, enabling a comprehensive and holistic view of a patient’s condition, which is essential for personalized treatment planning [8,9,10].
In the realm of medical imaging, AI is instrumental in automating routine tasks, enabling healthcare providers to focus more on patient care. Automated image analysis can streamline the identification of patterns and anomalies, reducing the burden on radiologists and other medical professionals. This increased efficiency not only enhances the speed of diagnosis but also contributes to cost-effectiveness and resource optimization within healthcare systems [11].
Despite the transformative potential of AI in medical imaging, challenges such as data privacy, ethical considerations, and the need for regulatory frameworks remain. As these technologies continue to evolve, addressing these issues is paramount for ensuring their responsible and ethical implementation. The collaboration between healthcare professionals, data scientists, and regulatory bodies is essential in harnessing the full potential of AI in medical imaging, while maintaining patient trust and safeguarding sensitive health information. In the future, further developments in AI technologies are expected to refine and expand their applications in medical imaging, ultimately improving patient outcomes and advancing the overall landscape of healthcare diagnostics and treatment.

2. Contributions

This Special Issue represents a collection of original research papers that bring advancements to automated medical decision support, involving the diagnosis of various abnormal conditions of the human organism, based on a wide range of medical imaging modalities. In the first paper, Clement et al. present an ensemble that incorporates four different versions of deep convolutional neural networks (DCNNs), coupled with a support vector machine (SVM) classifier. The purpose is to classify histopathological images of breast cancer into eight specific classes (four benign and four malignant). The methodology harnesses the strengths of DCNNs to extract a highly predictive multi-scale pooled image feature representation from breast cancer images at four distinct resolutions. These representations are then subjected to classification using a one-versus-one SVM. The proposed convolutional neural network (CNN) architecture achieves an impressive 90% accuracy in eight-class breast cancer classification.
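To make the idea concrete, the following minimal sketch pairs multi-scale pooled CNN features with a one-versus-one SVM; a single pretrained ResNet-50 stands in for the four DCNNs of the original work, and the chosen scales, kernel, and data handling are illustrative assumptions rather than the authors' exact pipeline.
```python
# Sketch: multi-scale pooled CNN features + one-versus-one SVM (illustrative only).
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.svm import SVC

device = "cuda" if torch.cuda.is_available() else "cpu"

# The paper ensembles four DCNN variants; a single pretrained ResNet-50 stands in here
# purely to keep the sketch short.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # keep the 2048-d pooled feature vector
backbone.eval().to(device)

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

@torch.no_grad()
def multiscale_features(img, scales=(224, 299, 384, 448)):
    """Pool backbone features of one RGB image tensor (3, H, W) in [0, 1] at several resolutions."""
    x = normalize(img).unsqueeze(0).to(device)
    feats = [backbone(torch.nn.functional.interpolate(
                 x, size=(s, s), mode="bilinear", align_corners=False)).squeeze(0)
             for s in scales]
    return torch.cat(feats).cpu().numpy()      # concatenated multi-scale representation

def train_ovo_svm(images, labels):
    """Fit an SVM (inherently one-versus-one for multi-class) on the multi-scale features."""
    X = np.stack([multiscale_features(img) for img in images])
    clf = SVC(kernel="rbf", decision_function_shape="ovo")
    clf.fit(X, labels)
    return clf
```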
In the second paper, Usui et al. introduce a method for estimating age from T1-weighted two-dimensional brain magnetic resonance (MR) images, involving a cohort of 1000 subjects aged between 5 and 79 years. The proposed approach employs a regression model based on the ResNet-50 architecture, which predicts the chronological age of previously unseen brain MR images after being trained on images with known ages. The study’s findings reveal a level of correlation comparable to that of related research, indicating the feasibility of age estimation across a broad age spectrum with enhanced accuracy. This suggests that the proposed method holds promise for accurate age estimation, even in diverse age groups, when utilizing T1-weighted two-dimensional brain MR imaging.
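The sketch below shows the general shape of such a model: a ResNet-50 backbone whose classification head is replaced by a single continuous output trained with a mean-squared-error loss. The single-channel stem, optimizer, and learning rate are illustrative assumptions, not the study's reported configuration.
```python
# Sketch: ResNet-50 adapted for age regression from brain MR slices (illustrative setup).
import torch
import torch.nn as nn
from torchvision import models

class AgeRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        # T1-weighted slices are single-channel; the RGB stem is replaced accordingly
        # (its pretrained weights are therefore reinitialized).
        self.backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        # Replace the 1000-class head with a single continuous output (age in years).
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, x):                   # x: (B, 1, H, W)
        return self.backbone(x).squeeze(1)  # (B,) predicted ages

def train_step(model, optimizer, images, ages):
    """One optimization step with a mean-squared-error regression loss."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(images), ages)
    loss.backward()
    optimizer.step()
    return loss.item()

model = AgeRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```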
In the third paper, Oura et al. report on the development and evaluation of a quality assurance (QA) system for chest X-ray images, based on deep learning. The QA system includes three classification models and one regression model, developed using a dataset of 23,000 curated images from various sources. The classification models address issues such as image orientation correction, left–right reversal, and the estimation of the patient’s body position, while the regression model corrects the image angle. Several CNN models are compared using five-fold cross-validation. The overall accuracy of the QA system is assessed using clinical images, and the mean correction time of the system is also measured. The deep learning-based QA system demonstrates efficient and accurate correction of chest X-ray images. The utilization of ResNet-50 for classification and a classical CNN for regression proves effective, ensuring precise adjustments for various aspects of image quality.
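The following sketch illustrates the two model types such a QA pipeline combines, ResNet-50 classifiers for the discrete checks and a small plain CNN for the continuous angle; the class counts and layer sizes are illustrative assumptions, not the published models.
```python
# Sketch: QA building blocks, ResNet-50 classifiers plus a small CNN angle regressor.
import torch
import torch.nn as nn
from torchvision import models

def make_classifier(num_classes):
    """ResNet-50 with its head replaced; grayscale X-rays are assumed to be
    replicated to three channels before entering the network."""
    m = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    m.fc = nn.Linear(m.fc.in_features, num_classes)
    return m

orientation_clf = make_classifier(4)   # rotated by 0 / 90 / 180 / 270 degrees
reversal_clf    = make_classifier(2)   # correct vs. left-right reversed
position_clf    = make_classifier(3)   # assumed body-position categories

class AngleRegressor(nn.Module):
    """A small classical CNN predicting a continuous residual rotation angle (degrees)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):                       # x: (B, 1, H, W) grayscale image
        return self.head(self.features(x).flatten(1)).squeeze(1)
```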
In the fourth paper, Kim et al. focus on the development and evaluation of deep-learning algorithms designed to automatically recommend orthotic insoles for individuals experiencing foot pain. The input data, gathered from 838 patients, includes details such as the resting calcaneal stance position, pelvic elevation, pelvic tilt, and pelvic rotation. The target data encompasses the foot posture index for the modified root technique, as well as the requirement for interventions such as heel lift, entire lift, and specific supports like lateral wedge, medial wedge, and calcaneocuboid arch. The deep-learning models developed therein, coupled with statistical validation, demonstrate their proficiency in automatically prescribing orthotic insoles for patients with foot pain, achieving commendable accuracy across various parameters.
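A minimal sketch of such a multi-output network over the tabular posture measurements is given below; the input features, the treatment of the foot posture index as a single numeric output, and the number of intervention targets are illustrative assumptions rather than the paper's exact design.
```python
# Sketch: a small multi-output network over tabular posture measurements (illustrative).
import torch
import torch.nn as nn

class InsoleRecommender(nn.Module):
    def __init__(self, n_inputs=4, n_interventions=6):
        super().__init__()
        # n_inputs covers e.g. resting calcaneal stance position, pelvic elevation,
        # pelvic tilt, and pelvic rotation; n_interventions covers heel lift, entire
        # lift, lateral/medial wedge, calcaneocuboid arch support, and similar targets.
        self.shared = nn.Sequential(nn.Linear(n_inputs, 64), nn.ReLU(),
                                    nn.Linear(64, 64), nn.ReLU())
        self.fpi_head = nn.Linear(64, 1)                         # foot posture index (numeric)
        self.intervention_head = nn.Linear(64, n_interventions)  # one logit per intervention

    def forward(self, x):
        h = self.shared(x)
        return self.fpi_head(h).squeeze(1), self.intervention_head(h)

model = InsoleRecommender()
fpi_pred, intervention_logits = model(torch.randn(8, 4))   # a batch of 8 patients
# Training would combine an MSE loss on fpi_pred with a per-intervention
# binary_cross_entropy_with_logits loss on intervention_logits.
```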
In the fifth paper, Kundrotas et al. explore various CNN architectures for their effective integration into the automated analysis of histopathological whole slide images. This advancement enhances the capability to detect early stages of diseases, including cancer, thereby contributing to improved health monitoring strategies. Moreover, the predictive abilities of these systems provide opportunities for anticipating and controlling the spread of global diseases, enabling preliminary analyses and viable solutions. The primary objective of the study is to augment the accuracy of machine learning methods in detecting tumor-damaged tissues in histopathological whole slide images. The most promising results were attained through the utilization of a multi-model ensemble, characterized by an impressive AUC (area under the curve) value of 0.97.
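The sketch below illustrates the kind of soft-voting ensemble and AUC evaluation described above, assuming a set of already trained two-class CNNs and a standard data loader; it is not the authors' specific ensembling scheme.
```python
# Sketch: soft-voting ensemble of trained two-class CNNs, evaluated with ROC AUC.
import numpy as np
import torch
from sklearn.metrics import roc_auc_score

@torch.no_grad()
def ensemble_probabilities(models, images):
    """Average the tumor-class probability predicted by each constituent model."""
    probs = []
    for m in models:
        m.eval()
        logits = m(images)                      # assumed shape (B, 2): [normal, tumor]
        probs.append(torch.softmax(logits, dim=1)[:, 1])
    return torch.stack(probs).mean(dim=0)       # (B,) ensemble tumor probability

def evaluate_auc(models, loader):
    """Accumulate predictions over an (images, labels) loader and compute the AUC."""
    y_true, y_prob = [], []
    for images, labels in loader:
        y_prob.append(ensemble_probabilities(models, images).cpu().numpy())
        y_true.append(labels.numpy())
    return roc_auc_score(np.concatenate(y_true), np.concatenate(y_prob))
```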
In the sixth paper, Pinheiro et al. introduce a foundational multimodality thalamus segmentation pipeline that incorporates diffusion MRI (dMRI) and T1-weighted images through a CNN approach, achieving remarkable segmentation accuracy. This marks a significant contribution to the automated diagnosis of diseases such as multiple sclerosis and Parkinson’s disease, where the segmentation and shape assessment of the thalamus play a crucial role. Additionally, the authors established an open benchmark, featuring a substantial, preprocessed, publicly available dataset. This dataset includes co-registered T1-weighted and dMRI images, manual thalamic masks, masks generated by three distinct automated methods, and a STAPLE consensus of the masks. As a result, this initiative opens up new possibilities for advancing research in the field.
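A minimal sketch of the multimodal input idea follows: co-registered T1-weighted and dMRI-derived maps are stacked as channels of a small fully convolutional network. The modality choice, channel counts, and architecture are illustrative assumptions, not the pipeline proposed in the paper.
```python
# Sketch: stacking co-registered modalities as input channels for CNN segmentation.
import torch
import torch.nn as nn

class MultimodalSegNet(nn.Module):
    def __init__(self, n_modalities=3, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(n_modalities, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, n_classes, 1),      # per-voxel class logits
        )

    def forward(self, t1, fa, md):
        # t1, fa, md: co-registered (B, 1, D, H, W) volumes, e.g. the T1-weighted image
        # plus fractional anisotropy and mean diffusivity maps derived from dMRI.
        return self.net(torch.cat([t1, fa, md], dim=1))
```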
In the seventh paper, Xiao and Lu, in response to the dependence of deep neural networks on large amounts of reliably labeled image data, propose a semi-supervised framework for medical image classification that integrates semi-supervised classification with unsupervised deep clustering. The methodology involves the iterative execution of two tasks, semi-supervised classification and unsupervised deep clustering, to attach label information to unlabeled data. This iterative process helps the model extract semantic information from unlabeled data, while mitigating the risk of overfitting to the limited labeled data available. To validate the proposed method, the authors conduct comparative experiments on public benchmark medical image datasets. Compared to earlier solutions, the method enhances model robustness and mitigates the impact of outliers.
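The sketch below captures the clustering half of such an iteration under simplifying assumptions: features from the current classifier are clustered, each cluster is named by a majority vote of its labeled members, and the resulting pseudo-labels are fed back into training. The clustering method, cluster count, and mapping rule are illustrative, not the authors' exact algorithm.
```python
# Sketch: one clustering pass that turns unlabeled samples into pseudo-labels.
import numpy as np
from sklearn.cluster import KMeans

def cluster_pseudo_labels(feats_lab, y_lab, feats_unlab, n_classes):
    """feats_*: (N, d) feature arrays from the current classifier; y_lab: integer labels.
    Cluster all features; label each cluster by majority vote of its labeled members."""
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=0)
    assign = km.fit_predict(np.vstack([feats_lab, feats_unlab]))
    lab_assign, unlab_assign = assign[: len(feats_lab)], assign[len(feats_lab):]
    cluster_to_class = {}
    for c in range(n_classes):
        members = y_lab[lab_assign == c]
        if len(members) > 0:
            cluster_to_class[c] = np.bincount(members).argmax()
    keep = np.array([c in cluster_to_class for c in unlab_assign])
    pseudo = np.array([cluster_to_class[c] for c in unlab_assign[keep]])
    return keep, pseudo   # which unlabeled samples received a pseudo-label, and the labels

# Outer loop (schematic): extract features with the current classifier, call
# cluster_pseudo_labels, append the pseudo-labeled samples to the training set,
# retrain the classifier, and repeat until the pseudo-labels stabilize.
```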
In the eighth paper, Zhang et al. introduce a method for optimizing hyperparameters in deep networks employed for the automated classification of histopathological images. The method is based on model fusion in the weight space. The authors utilize a cyclical learning rate strategy to fine-tune individual models and suggest a ranking strategy, based on accuracy and diversity, for the selection of candidate models. The weights of the selected constituent models are fused to produce a model whose performance is better aligned with the expected value, potentially enhancing its generalization ability. Furthermore, the proposed strategy significantly reduces the model’s testing cost. Experiments conducted on two histopathological image datasets demonstrate the effectiveness of the proposed model compared to baseline architectures such as ResNet, VGG, and DenseNet, and their ensemble versions.
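The core weight-space fusion step can be sketched as a parameter-wise average of selected checkpoints, as below; the checkpoint selection by accuracy and diversity ranking is omitted, so this is an illustration of the principle rather than the published method.
```python
# Sketch: fusing model checkpoints in weight space by averaging their parameters.
import copy
import torch

def fuse_in_weight_space(models):
    """Return a new model whose parameters are the element-wise mean of the inputs."""
    fused = copy.deepcopy(models[0])
    state_dicts = [m.state_dict() for m in models]
    fused_state = {}
    for key in state_dicts[0]:
        stacked = torch.stack([sd[key].float() for sd in state_dicts])
        fused_state[key] = stacked.mean(dim=0).to(state_dicts[0][key].dtype)
    fused.load_state_dict(fused_state)
    return fused

# Typical usage: collect snapshots saved at the low points of a cyclical learning-rate
# schedule (e.g. torch.optim.lr_scheduler.CyclicLR), pick the top-ranked ones, then
# evaluate fuse_in_weight_space(selected_snapshots) on the validation set.
```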
In the ninth paper, Fachrel et al. propose a methodology for the effective and accurate classification of lung diseases, specifically targeting COVID-19, pneumonia, and normal cases, based on chest X-ray images. The research focuses on the optimal use of CNN and long short-term memory (LSTM) architectures, from the perspective of evaluation metrics and training efficiency. Data augmentation is deployed to overcome the adverse effects of imbalanced datasets. The best-performing model, featuring five convolutional blocks and two LSTM layers, needs no augmentation and achieves remarkable Dice scores of up to 0.99.
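A minimal sketch of one common CNN-LSTM arrangement for single-image classification follows, where convolutional blocks produce a feature map whose rows are read by an LSTM as a sequence; the block count, hidden sizes, and sequence construction are illustrative assumptions, not the authors' exact five-block, two-layer model.
```python
# Sketch: CNN blocks feeding an LSTM over the rows of the feature map (illustrative sizes).
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    def __init__(self, n_classes=3):               # COVID-19 / pneumonia / normal
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.lstm = nn.LSTM(input_size=128, hidden_size=128,
                            num_layers=2, batch_first=True)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):                            # x: (B, 1, H, W) chest X-ray
        f = self.cnn(x)                              # (B, 128, H', W')
        seq = f.mean(dim=3).permute(0, 2, 1)         # rows as a sequence: (B, H', 128)
        _, (h_n, _) = self.lstm(seq)
        return self.fc(h_n[-1])                      # class logits from the last LSTM layer
```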
In the tenth paper, Berchiolli et al. propose a segmentation method for the thoracic cavity in dynamic contrast-enhanced MRI data, to assist with the correct diagnosis of breast cancer. The main challenge is the fact that various internal organs can appear very similar to malignant breast lesions after the injection of the contrast agent. The proposed approach employs various CNN-based architectures, assessing the effect of several enhancements applied to the baseline models. During the evaluation, the proposed methodology is shown to surpass the current state of the art, excelling in both data efficiency and conformity with expert-made annotations.
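Conformity with expert annotations in segmentation tasks is commonly quantified by the Dice overlap, sketched below for binary masks; the paper's own evaluation metrics may differ.
```python
# Sketch: Dice overlap between a predicted mask and an expert annotation.
import numpy as np

def dice_coefficient(pred_mask, gt_mask, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks of equal shape."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)
```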
Finally, in the eleventh paper, Park et al. address the problem of the accurate detection and effective treatment of gastric cancer. They propose an image-guided solution as an alternative to the usual gastroscopic biopsy, which is time-consuming and may cause delays in diagnosis and subsequent treatment. The proposed computer-aided diagnosis (CADx) system employs various CNN-based network architectures to produce the diagnosis. Since no large image datasets are available in this field, the authors propose an image augmentation technique using cut-and-paste operations and a sliding window-based algorithm. In various classification scenarios, their system achieves an accuracy between 83% and 90%.
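The two ingredients named above can be sketched as follows: a cut-and-paste operation that copies a patch from one image into another, and a sliding-window pass that tiles a large image into fixed-size patches. Patch sizes, strides, and coordinates are illustrative assumptions, not the authors' implementation.
```python
# Sketch: cut-and-paste augmentation and sliding-window tiling (illustrative parameters).
import numpy as np

def cut_and_paste(src_img, src_box, dst_img, dst_xy):
    """Copy the region src_box = (y, x, h, w) from src_img into dst_img at dst_xy = (y, x)."""
    y, x, h, w = src_box
    dy, dx = dst_xy
    out = dst_img.copy()
    out[dy:dy + h, dx:dx + w] = src_img[y:y + h, x:x + w]
    return out

def sliding_windows(img, window=224, stride=112):
    """Yield (y, x, patch) tiles covering the image with a fixed stride."""
    H, W = img.shape[:2]
    for y in range(0, max(H - window, 0) + 1, stride):
        for x in range(0, max(W - window, 0) + 1, stride):
            yield y, x, img[y:y + window, x:x + window]
```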

3. Conclusions

In conclusion, the integration of AI technology into medical image analysis has ushered in a transformative era in healthcare diagnostics and treatment planning. This Special Issue has explored the multifaceted applications and the profound impact of AI in revolutionizing the way medical professionals interpret and utilize imaging data. From enhancing the accuracy and efficiency of disease detection to facilitating personalized treatment strategies, AI has proven itself as a powerful ally in the medical field.
The rapid evolution of deep learning algorithms, particularly CNNs, has played a pivotal role in the success of AI in medical image analysis. These algorithms exhibit exceptional capabilities in recognizing complex patterns and anomalies within various medical imaging modalities, including X-rays, MRIs, and CT scans. The improved accuracy and speed of diagnosis offered by AI can not only expedite patient care but also contribute to more informed decision-making by healthcare practitioners.
Despite the remarkable strides made in this field, challenges remain, including the need for large, diverse datasets, the handling of ethical considerations, and the seamless integration of AI technologies into existing healthcare workflows. The ongoing collaboration between AI researchers, healthcare professionals, and regulatory bodies is crucial to overcoming these challenges and fostering the responsible and effective deployment of AI in medical image analysis.
As AI continues to evolve and mature, its role in medical image analysis is poised to redefine the landscape of healthcare. The promise of improved diagnostic accuracy, early disease detection, and personalized treatment strategies positions AI as an invaluable tool in advancing the quality of patient care and contributing to the overall well-being of individuals around the world.

Author Contributions

Conceptualization, L.S. and L.K.; writing—original draft preparation, L.S. and L.K.; writing—review and editing, L.S. and L.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Consolidator Researcher Program of Óbuda University.

Conflicts of Interest

The authors declare no conflicts of interest.

List of Contributions

  • Clement, D.; Agu, E.; Suleiman, M.A.; Obayemi, J.; Adeshina, S.; Soboyejo, W. Multi-Class Breast Cancer Histopathological Image Classification Using Multi-Scale Pooled Image Feature Representation (MPIFR) and One-Versus-One Support Vector Machines. Appl. Sci. 2023, 13, 156. https://doi.org/10.3390/app13010156.
  • Usui, K.; Yoshimura, T.; Tang, M.H.; Sugimori, H. Age Estimation from Brain Magnetic Resonance Images Using Deep Learning Techniques in Extensive Age Range. Appl. Sci. 2023, 13, 1753. https://doi.org/10.3390/app13031753.
  • Oura, D.; Sato, S.; Honma, Y.; Kuwajima, S.; Sugimori, H. Quality Assurance of Chest X-ray Images with a Combination of Deep Learning Methods. Appl. Sci. 2023, 13, 2067. https://doi.org/10.3390/app13042067.
  • Kim, J.K.; Choo, Y.J.; Park, I.S.; Choi, J.W.; Park, D.; Chang, M.C. Deep-Learning Algorithms for Prescribing Insoles to Patients with Foot Pain. Appl. Sci. 2023, 13, 2208. https://doi.org/10.3390/app13042208.
  • Kundrotas, M.; Mažonienė, E.; Šešok, D. Automatic Tumor Identification from Scans of Histopathological Tissues. Appl. Sci. 2023, 13, 4333. https://doi.org/10.3390/app13074333.
  • Pinheiro, G.R.; Brusini, L.; Carmo, D.; Prôa, R.; Abreu, T.; Appenzeller, S.; Menegaz, G.; Rittner, L. Thalamus Segmentation Using Deep Learning with Diffusion MRI Data: An Open Benchmark. Appl. Sci. 2023, 13, 5284. https://doi.org/10.3390/app13095284.
  • Xiao, B.; Lu, C.Y. Semi-Supervised Medical Image Classification Combined with Unsupervised Deep Clustering. Appl. Sci. 2023, 13, 5520. https://doi.org/10.3390/app13095520.
  • Zhang, G.; Lai, Z.F.; Chen, Y.Q.; Liu, H.T.; Sun, W.J. A Histopathological Image Classification Method Based on Model Fusion in the Weight Space. Appl. Sci. 2023, 13, 7009. https://doi.org/10.3390/app13127009.
  • Fachrel, J.; Pravitasari, A.A.; Yulita, I.N.; Ardhisasmita, M.N.; Indrayatna, F. Enhancing an Imbalanced Lung Disease X-ray Image Classification with the CNN-LSTM Model. Appl. Sci. 2023, 13, 8227. https://doi.org/10.3390/app13148227.
  • Berchiolli, M.; Wolfram, S.; Balachandran, W.; Gan, T.H. Fully Automatic Thoracic Cavity Segmentation in Dynamic Contrast Enhanced Breast MRI Using Deep Convolutional Neural Networks. Appl. Sci. 2023, 13, 10160. https://doi.org/10.3390/app131810160.
  • Park, J.B.; Lee, H.S.; Cho, H.C. Investigating Effective Data Augmentation Techniques for Accurate Gastric Classification in the Development of a Deep Learning-Based Computer-Aided Diagnosis System. Appl. Sci. 2023, 13, 12325. https://doi.org/10.3390/app132212325.

References

  1. Potočnik, J.; Foley, S.; Thomas, E. Current and potential applications of artificial intelligence in medical imaging practice: A narrative review. J. Med. Imaging Radiat. Sci. 2023, 54, 376–385.
  2. Messaoudi, H.; Belaid, A.; Ben Salem, D.; Conze, P.H. Cross-dimensional transfer learning in medical image segmentation with deep learning. Med. Image Anal. 2023, 88, 102868.
  3. Billot, B.; Greve, D.N.; Puonti, O.; Thielscher, A.; Van Leemput, K.; Fischl, B.; Dalca, A.V.; Iglesias, J.E. SynthSeg: Segmentation of brain MRI scans of any contrast and resolution without retraining. Med. Image Anal. 2023, 86, 102789.
  4. Szepesi, P.; Szilágyi, L. Detection of pneumonia using convolutional neural networks and deep learning. Biocybern. Biomed. Eng. 2022, 42, 1012–1022.
  5. Prabhu, S.; Prasad, K.; Robles-Kelly, A.; Lu, X.Q. AI-based carcinoma detection and classification using histopathological images: A systematic review. Comput. Biol. Med. 2022, 142, 105209.
  6. Kaviani, S.; Han, K.J.; Sohn, I. Adversarial attacks and defenses on AI in medical imaging informatics: A survey. Expert Syst. Appl. 2022, 198, 116815.
  7. Hu, W.M.; Li, X.T.; Li, C.; Li, R.; Jiang, T.; Sun, H.Z.; Huang, X.N.; Grzegorzek, M.; Li, X.Y. A state-of-the-art survey of artificial neural networks for whole-slide image analysis: From popular convolutional neural networks to potential visual transformers. Comput. Biol. Med. 2023, 161, 107034.
  8. Li, J.J.; Han, X.; Qin, Y.M.; Tan, F.; Chen, Y.L.; Wang, Z.K.; Song, H.T.; Zhou, X.; Zhang, Y.; Hu, L.; et al. Artificial intelligence accelerates multi-modal biomedical process: A survey. Neurocomputing 2023, 558, 126720.
  9. Altini, N.; Prencipe, B.; Cascarano, G.D.; Brunetti, A.; Brunetti, G.; Triggiani, V.; Carnimeo, L.; Marino, F.; Guerriero, A.; Villani, L.; et al. Liver, kidney and spleen segmentation from CT scans and MRI with deep learning: A survey. Neurocomputing 2022, 490, 30–53.
  10. Xu, Z.; Wang, Y.Q.; Chen, M.; Zhang, Q. Multi-region radiomics for artificially intelligent diagnosis of breast cancer using multimodal ultrasound. Comput. Biol. Med. 2022, 149, 105920.
  11. Gao, L.; Moodie, M.; Watts, J.J.; Wang, L. Cost-effectiveness of osteoporosis opportunistic screening using computed tomography in China. Value Health Reg. Issues 2023, 38, 38–44.
