Advances in Medical Imaging and Machine Learning

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Medical Imaging".

Deadline for manuscript submissions: 31 August 2025 | Viewed by 4562

Special Issue Editors


Dr. Ester Bonmati Coll
Guest Editor
Department of Computer Science and Engineering, University of Westminster, London W1W 7BY, UK
Interests: medical imaging; machine learning; computer-assisted interventions; cancer detection and diagnosis; ultrasound-guided procedures

Dr. Barbara Villarini
Guest Editor
Department of Computer Science, University of Westminster, London W1W 7BY, UK
Interests: medical image processing; 3D registration and reconstruction; image and video quality assessment

Special Issue Information

Dear Colleagues,

This Special Issue, entitled “Advances in Medical Imaging and Machine Learning”, aims to bridge the gap between medical imaging and cutting-edge machine learning algorithms. Medical imaging plays a crucial role in modern healthcare by enabling the non-invasive visualization and diagnosis of a wide range of diseases and conditions. However, the complexity of medical imaging data poses significant challenges for its accurate interpretation and analysis.

Machine learning techniques offer promising solutions to some of these challenges by automating the process of image analysis, enhancing the accuracy of diagnosis, and unlocking valuable insights from medical data. By leveraging algorithms that are capable of learning patterns and relationships within vast datasets, machine learning enables healthcare professionals to extract actionable information, predict the progression of disease, and personalize treatment strategies.

This Special Issue seeks to explore the intersection of medical imaging and machine learning, thus fostering interdisciplinary collaboration between researchers, clinicians, and engineers.

We would like to invite you to submit original research articles, reviews, and perspectives that showcase innovative approaches, practical applications, and critical insights in this rapidly evolving field.

Dr. Ester Bonmati Coll
Dr. Barbara Villarini
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image pattern recognition
  • image classification
  • image segmentation
  • diagnosis
  • anomaly detection
  • genetics with medical imaging
  • machine learning
  • deep learning
  • research translation

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

13 pages, 4901 KiB  
Article
A New Deep Learning-Based Method for Automated Identification of Thoracic Lymph Node Stations in Endobronchial Ultrasound (EBUS): A Proof-of-Concept Study
by Øyvind Ervik, Mia Rødde, Erlend Fagertun Hofstad, Ingrid Tveten, Thomas Langø, Håkon O. Leira, Tore Amundsen and Hanne Sorger
J. Imaging 2025, 11(1), 10; https://doi.org/10.3390/jimaging11010010 - 5 Jan 2025
Viewed by 706
Abstract
Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is a cornerstone in minimally invasive thoracic lymph node sampling. In lung cancer staging, precise assessment of lymph node position is crucial for clinical decision-making. This study aimed to demonstrate a new deep learning method to classify thoracic lymph nodes based on their anatomical location using EBUS images. Bronchoscopists labeled lymph node stations in real time according to the Mountain-Dressler nomenclature. EBUS images were then used to train and test a deep neural network (DNN) model, with intraoperative labels as ground truth. In total, 28,134 EBUS images were acquired from 56 patients. The model achieved an overall classification accuracy of 59.5 ± 5.2%. The highest precision, sensitivity, and F1 score were observed in station 4L, at 77.6 ± 13.1%, 77.6 ± 15.4%, and 77.6 ± 15.4%, respectively. The lowest precision, sensitivity, and F1 score were observed in station 10L. The average processing and prediction time for a sequence of ten images was 0.65 ± 0.04 s, demonstrating the feasibility of real-time applications. In conclusion, the new DNN-based model could be used to classify lymph node stations from EBUS images. The method's performance was promising, with potential for clinical use.
(This article belongs to the Special Issue Advances in Medical Imaging and Machine Learning)

17 pages, 2944 KiB  
Article
Enhanced CATBraTS for Brain Tumour Semantic Segmentation
by Rim El Badaoui, Ester Bonmati Coll, Alexandra Psarrou, Hykoush A. Asaturyan and Barbara Villarini
J. Imaging 2025, 11(1), 8; https://doi.org/10.3390/jimaging11010008 - 3 Jan 2025
Viewed by 435
Abstract
The early and precise identification of a brain tumour is imperative for enhancing a patient’s life expectancy; this can be facilitated by quick and efficient tumour segmentation in medical imaging. Automatic brain tumour segmentation tools in computer vision have integrated powerful deep learning architectures to enable accurate tumour boundary delineation. Our study aims to demonstrate improved segmentation accuracy and higher statistical stability, using datasets obtained from diverse imaging acquisition parameters. This paper introduces a novel, fully automated model called Enhanced Channel Attention Transformer (E-CATBraTS) for Brain Tumour Semantic Segmentation; this model builds upon 3D CATBraTS, a vision transformer employed in magnetic resonance imaging (MRI) brain tumour segmentation tasks. E-CATBraTS integrates convolutional neural networks and Swin Transformer, incorporating channel shuffling and attention mechanisms to effectively segment brain tumours in multi-modal MRI. The model was evaluated on four datasets containing 3137 brain MRI scans. Through the adoption of E-CATBraTS, the accuracy of the results improved significantly on two datasets, outperforming the current state-of-the-art models by a mean DSC of 2.6% while maintaining a high accuracy that is comparable to the top-performing models on the other datasets. The results demonstrate that E-CATBraTS achieves both high segmentation accuracy and elevated generalisation abilities, ensuring the model is robust to dataset variation.
(This article belongs to the Special Issue Advances in Medical Imaging and Machine Learning)
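Several of the papers in this issue report segmentation quality as a Dice similarity coefficient (DSC), as in the mean DSC improvement cited above. As a point of reference only, and not code from any of these papers, a minimal NumPy sketch of the per-image DSC for a pair of binary masks:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient (DSC) between two binary masks.

    DSC = 2 * |pred AND target| / (|pred| + |target|); eps guards the
    empty-mask case where both areas are zero.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Toy 2x3 masks: overlap = 2 pixels, each mask covers 3 pixels,
# so DSC = 2*2 / (3+3) ~= 0.667.
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, gt), 3))  # → 0.667
```

Per-dataset figures such as the 2.6% mean DSC gain above would be averages of this per-image score over a test set.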

22 pages, 15973 KiB  
Article
Three-Dimensional Bone-Image Synthesis with Generative Adversarial Networks
by Christoph Angermann, Johannes Bereiter-Payr, Kerstin Stock, Gerald Degenhart and Markus Haltmeier
J. Imaging 2024, 10(12), 318; https://doi.org/10.3390/jimaging10120318 - 11 Dec 2024
Viewed by 666
Abstract
Medical image processing has been highlighted as an area where deep-learning-based models have the greatest potential. However, in the medical field, in particular, problems of data availability and privacy are hampering research progress and, thus, rapid implementation in clinical routine. The generation of synthetic data not only ensures privacy but also allows the drawing of new patients with specific characteristics, enabling the development of data-driven models on a much larger scale. This work demonstrates that three-dimensional generative adversarial networks (GANs) can be efficiently trained to generate high-resolution medical volumes with finely detailed voxel-based architectures. In addition, GAN inversion is successfully implemented for the three-dimensional setting and used for extensive research on model interpretability and applications such as image morphing, attribute editing, and style mixing. The results are comprehensively validated on a database of three-dimensional HR-pQCT instances representing the bone micro-architecture of the distal radius.
(This article belongs to the Special Issue Advances in Medical Imaging and Machine Learning)

19 pages, 785 KiB  
Article
Transformer Dil-DenseUnet: An Advanced Architecture for Stroke Segmentation
by Nesrine Jazzar, Besma Mabrouk and Ali Douik
J. Imaging 2024, 10(12), 304; https://doi.org/10.3390/jimaging10120304 - 25 Nov 2024
Viewed by 695
Abstract
We propose a novel architecture, Transformer Dil-DenseUNet, designed to address the challenges of accurately segmenting stroke lesions in MRI images. Precise segmentation is essential for diagnosing and treating stroke patients, as it provides critical spatial insights into the affected brain regions and the extent of damage. Traditional manual segmentation is labor-intensive and error-prone, highlighting the need for automated solutions. Our Transformer Dil-DenseUNet combines DenseNet, dilated convolutions, and Transformer blocks, each contributing unique strengths to enhance segmentation accuracy. The DenseNet component captures fine-grained details and global features by leveraging dense connections, improving both precision and feature reuse. The dilated convolutional blocks, placed before each DenseNet module, expand the receptive field, capturing broader contextual information essential for accurate segmentation. Additionally, the Transformer blocks within our architecture address CNN limitations in capturing long-range dependencies by modeling complex spatial relationships through multi-head self-attention mechanisms. We assess our model’s performance on the Ischemic Stroke Lesion Segmentation Challenge 2015 (SISS 2015) and ISLES 2022 datasets. In the testing phase, the model achieves a Dice coefficient of 0.80 ± 0.30 on SISS 2015 and 0.81 ± 0.33 on ISLES 2022, surpassing the current state-of-the-art results on these datasets.
(This article belongs to the Special Issue Advances in Medical Imaging and Machine Learning)

12 pages, 2015 KiB  
Article
Automatic Segmentation of Mediastinal Lymph Nodes and Blood Vessels in Endobronchial Ultrasound (EBUS) Images Using Deep Learning
by Øyvind Ervik, Ingrid Tveten, Erlend Fagertun Hofstad, Thomas Langø, Håkon Olav Leira, Tore Amundsen and Hanne Sorger
J. Imaging 2024, 10(8), 190; https://doi.org/10.3390/jimaging10080190 - 6 Aug 2024
Cited by 1 | Viewed by 1533
Abstract
Endobronchial ultrasound (EBUS) is used in the minimally invasive sampling of thoracic lymph nodes. In lung cancer staging, the accurate assessment of mediastinal structures is essential but challenged by variations in anatomy, image quality, and operator-dependent image interpretation. This study aimed to automatically detect and segment mediastinal lymph nodes and blood vessels employing a novel U-Net architecture-based approach in EBUS images. A total of 1161 EBUS images from 40 patients were annotated. For training and validation, 882 images from 30 patients and 145 images from 5 patients were utilized. A separate set of 134 images was reserved for testing. For lymph node and blood vessel segmentation, the mean ± standard deviation (SD) values of the Dice similarity coefficient were 0.71 ± 0.35 and 0.76 ± 0.38, those of the precision were 0.69 ± 0.36 and 0.82 ± 0.22, those of the sensitivity were 0.71 ± 0.38 and 0.80 ± 0.25, those of the specificity were 0.98 ± 0.02 and 0.99 ± 0.01, and those of the F1 score were 0.85 ± 0.16 and 0.81 ± 0.21, respectively. The average processing and segmentation run-time per image was 55 ± 1 ms (mean ± SD). The new U-Net architecture-based approach (EBUS-AI) could automatically detect and segment mediastinal lymph nodes and blood vessels in EBUS images. The method performed well and was feasible and fast, enabling real-time automatic labeling.
(This article belongs to the Special Issue Advances in Medical Imaging and Machine Learning)
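The abstract above reports per-image precision, sensitivity, specificity, and F1 alongside the Dice score. As an illustration only, not the authors' EBUS-AI code, these can all be derived from confusion-matrix counts over flattened binary masks; the function name below is hypothetical:

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Per-image precision, sensitivity, specificity, and F1 for binary masks.

    Counts true/false positives and negatives pixel-wise, then applies the
    standard definitions; degenerate zero denominators yield 0.0.
    """
    pred = np.asarray(pred, dtype=bool).ravel()
    target = np.asarray(target, dtype=bool).ravel()
    tp = int(np.sum(pred & target))    # predicted foreground, truly foreground
    fp = int(np.sum(pred & ~target))   # predicted foreground, truly background
    fn = int(np.sum(~pred & target))   # missed foreground
    tn = int(np.sum(~pred & ~target))  # correctly rejected background
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return {"precision": precision, "sensitivity": sensitivity,
            "specificity": specificity, "f1": f1}

print(segmentation_metrics([1, 1, 0, 0], [1, 0, 1, 0]))
```

Study-level figures such as 0.69 ± 0.36 would then be the mean ± SD of one of these metrics across the test images.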
