Medical Image Classification and Segmentation: Progress and Challenges

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Medical Imaging".

Deadline for manuscript submissions: 31 December 2024

Special Issue Editors


Dr. Meng Lv
Guest Editor
School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
Interests: high-dimensional medical image intelligent interpretation; medical hyperspectral image processing; multimodal medical image fusion processing

Dr. Xin Chen
Guest Editor
School of Computer Science, University of Nottingham, Nottingham NG8 1BB, UK
Interests: medical image analysis; computer vision; machine learning

Special Issue Information

Dear Colleagues,

Medical imaging is one of the cornerstones of modern medical diagnostics. It originated in the field of radiology and now encompasses technologies such as X-ray imaging, radionuclide imaging, ultrasound imaging, magnetic resonance imaging, optical imaging, mass spectrometry imaging, bioelectric/magnetic imaging, and electron microscopy imaging. In recent decades, with the rapid development of hyperspectral cameras and artificial intelligence, hyperspectral imaging (HSI) has emerged as a promising auxiliary diagnostic technology.

Although medical imaging technology captures a large amount of information that the human eye cannot perceive, effectively exploiting this information for auxiliary diagnosis and disease treatment remains extremely challenging. The challenges include low spatial or spectral resolution imposed by imaging hardware, small-sample problems caused by missing clinical annotations, difficulty in extracting diagnostically relevant information, class-imbalanced learning, multimodal learning, and domain adaptation.

Therefore, this Special Issue aims to collate papers that address the aforementioned challenges and to highlight recent research findings and developments in the field of medical image classification and segmentation. We also welcome manuscripts on topics closely related to the scope of this Special Issue (e.g., image registration, image reconstruction, feature extraction, and feature selection).

Dr. Meng Lv
Dr. Xin Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • 2D/3D image segmentation
  • image classification
  • image registration
  • image super-resolution
  • image reconstruction
  • image feature selection and extraction

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

33 pages, 9039 KiB  
Article
Integrated Ultrasound Characterization of the Diet-Induced Obesity (DIO) Model in Young Adult C57BL/6J Mice: Assessment of Cardiovascular, Renal and Hepatic Changes
by Sara Gargiulo, Virginia Barone, Denise Bonente, Tiziana Tamborrino, Giovanni Inzalaco, Lisa Gherardini, Eugenio Bertelli and Mario Chiariello
J. Imaging 2024, 10(9), 217; https://doi.org/10.3390/jimaging10090217 - 4 Sep 2024
Abstract
Consuming an unbalanced diet and being overweight represent a global health problem in young people and adults of both sexes, and may lead to metabolic syndrome. The diet-induced obesity (DIO) model in the C57BL/6J mouse substrain, which mimics the gradual weight gain of humans consuming a “Western-type” diet (WD), is of great interest. This study aims to characterize this animal model using high-frequency ultrasound imaging (HFUS) as a complementary tool for longitudinally monitoring changes in the liver, heart and kidneys. Long-term WD feeding increased the mice's body weight (BW), liver/BW ratio and body condition score (BCS), transaminases, glucose and insulin, and caused dyslipidemia and insulin resistance. Echocardiography revealed subtle cardiac remodeling in WD-fed mice, highlighting a significant age–diet interaction for some left ventricular morphofunctional parameters. Qualitative and parametric HFUS analyses of the liver in WD-fed mice showed a progressive increase in echogenicity and echotexture heterogeneity, and equal or higher brightness of the renal cortex. Furthermore, renal circulation was impaired in WD-fed female mice. The ultrasound and histopathological findings were concordant. Overall, HFUS can improve the translational value of preclinical DIO models through an integrated approach with conventional methods, enabling comprehensive, non-invasive in vivo identification of early disease stages, in accordance with the 3Rs.

17 pages, 11358 KiB  
Article
Fiduciary-Free Frame Alignment for Robust Time-Lapse Drift Correction Estimation in Multi-Sample Cell Microscopy
by Stefan Baar, Masahiro Kuragano, Naoki Nishishita, Kiyotaka Tokuraku and Shinya Watanabe
J. Imaging 2024, 10(8), 181; https://doi.org/10.3390/jimaging10080181 - 29 Jul 2024
Abstract
When analyzing microscopic time-lapse observations, frame alignment is an essential task for visually understanding the morphological and translational dynamics of cells and tissue. While in traditional single-sample microscopy the region of interest (RoI) is fixed, multi-sample microscopy often uses a single microscope that scans multiple samples over a long period of time by laterally relocating the sample stage. This relocation of the optics induces a statistical RoI offset and can introduce jitter as well as drift, resulting in a misaligned RoI for each sample's time-lapse observation (stage drift). We introduce a robust approach to automatically align all frames within a time-lapse observation and compensate for frame drift. In this study, we present a sub-pixel-precise alignment approach based on recurrent all-pairs field transforms (RAFT), a deep network architecture for optical flow. We show that a RAFT model pre-trained on the Sintel dataset performed with near-perfect precision on registration tasks across a set of ten contextually unrelated time-lapse observations containing 250 frames each. Our approach is robust for elastically undistorted, translationally displaced (x, y) microscopic time-lapse observations and was tested on multiple samples with varying cell density, obtained using different devices. The approach performed well only for registration, not for tracking individual image components such as cells and contaminants. We provide an open-source command-line application that corrects for stage drift and jitter.
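The core idea of estimating a global translation from dense optical flow can be sketched with the pre-trained RAFT model shipped in torchvision. The snippet below is our illustration, not the authors' released tool; `estimate_drift` is a hypothetical helper, and reducing the flow field to its per-axis median is one plausible way to obtain a single (dx, dy) stage offset.

```python
# Minimal sketch: translation-only drift estimation with torchvision's
# pre-trained RAFT. Assumes H and W are divisible by 8, as RAFT requires.
import torch
from torchvision.models.optical_flow import raft_large, Raft_Large_Weights

weights = Raft_Large_Weights.DEFAULT
model = raft_large(weights=weights).eval()
preprocess = weights.transforms()  # dtype conversion and normalization

@torch.no_grad()
def estimate_drift(ref, frame):
    """Median (dx, dy) displacement of `frame` relative to `ref`.

    ref, frame: float tensors of shape (1, 3, H, W) with values in [0, 1].
    """
    ref_b, frame_b = preprocess(ref, frame)
    flows = model(ref_b, frame_b)   # list of iterative flow refinements
    flow = flows[-1][0]             # final estimate, shape (2, H, W)
    return flow[0].median().item(), flow[1].median().item()
```

The estimated offset could then be applied as a sub-pixel shift (e.g., with scipy.ndimage.shift) to align each frame to the reference.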

18 pages, 2231 KiB  
Article
Reducing Manual Annotation Costs for Cell Segmentation by Upgrading Low-Quality Annotations
by Serban Vădineanu, Daniël M. Pelt, Oleh Dzyubachyk and Kees Joost Batenburg
J. Imaging 2024, 10(7), 172; https://doi.org/10.3390/jimaging10070172 - 17 Jul 2024
Abstract
Deep-learning algorithms for cell segmentation typically require large data sets with high-quality annotations for training. However, the annotation cost of obtaining such sets may prove prohibitively expensive. Our work aims to reduce the time needed to create high-quality annotations of cell images by using a relatively small, well-annotated data set to train a convolutional neural network that upgrades lower-quality annotations produced at lower annotation cost. We investigate the performance of our solution when upgrading annotation quality for labels affected by three types of annotation error: omission, inclusion, and bias. We observe that our method can upgrade annotations affected by high error levels from 0.3 to 0.9 Dice similarity with the ground-truth annotations. We also show that a relatively small, well-annotated set enlarged with samples carrying upgraded annotations can be used to train better-performing cell segmentation networks than training on the well-annotated set alone. Moreover, we present a use case in which our solution is successfully employed to increase the quality of the predictions of a segmentation network trained on just 10 annotated samples.
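The Dice scores quoted here are the standard overlap measure between binary masks. For reference, a minimal NumPy implementation (ours, not from the paper):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks (1.0 = identical)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:  # both masks empty: conventionally perfect overlap
        return 1.0
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```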

15 pages, 3271 KiB  
Article
A 2.5D Self-Training Strategy for Carotid Artery Segmentation in T1-Weighted Brain Magnetic Resonance Images
by Adriel Silva de Araújo, Márcio Sarroglia Pinho, Ana Maria Marques da Silva, Luis Felipe Fiorentini and Jefferson Becker
J. Imaging 2024, 10(7), 161; https://doi.org/10.3390/jimaging10070161 - 3 Jul 2024
Abstract
Precise annotation of large medical image datasets can be time-consuming. Additionally, when dealing with volumetric regions of interest, it is typical to apply segmentation techniques to 2D slices, discarding information that is important for accurately segmenting 3D structures. This study presents a deep learning pipeline that simultaneously tackles both challenges. First, to streamline the annotation process, we employ a semi-automatic segmentation approach using bounding boxes as masks, which is less time-consuming than pixel-level delineation. Subsequently, recursive self-training is utilized to enhance annotation quality. Finally, a 2.5D segmentation technique is adopted, wherein a slice of a volumetric image is segmented using a pseudo-RGB image. The pipeline was applied to segment the carotid artery tree in T1-weighted brain magnetic resonance images. Using 42 volumetric non-contrast T1-weighted brain scans from four datasets, we delineated bounding boxes around the carotid arteries in the axial slices. Pseudo-RGB images were generated from these slices, and recursive segmentation was conducted using a Res-Unet-based neural network architecture. The model's performance was tested on a separate dataset, with ground-truth annotations provided by a radiologist. After recursive training, we achieved an Intersection over Union (IoU) score of 0.68 ± 0.08 on the unseen dataset, with commendable qualitative results.
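A 2.5D pseudo-RGB input of this kind can be built by stacking a slice with its axial neighbors into three channels. The sketch below is one plausible construction; the exact slice neighborhood and normalization are our assumptions, not details taken from the paper.

```python
import numpy as np

def pseudo_rgb(volume, z):
    """Stack slices z-1, z, z+1 of a (Z, H, W) volume into an (H, W, 3) image."""
    z_prev, z_next = max(z - 1, 0), min(z + 1, volume.shape[0] - 1)
    img = np.stack([volume[z_prev], volume[z], volume[z_next]], axis=-1)
    img = img.astype(np.float32)
    img -= img.min()        # min-max normalize so the 2D network sees [0, 1]
    if img.max() > 0:
        img /= img.max()
    return img
```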

29 pages, 101376 KiB  
Article
When Two Eyes Don’t Suffice—Learning Difficult Hyperfluorescence Segmentations in Retinal Fundus Autofluorescence Images via Ensemble Learning
by Monty Santarossa, Tebbo Tassilo Beyer, Amelie Bernadette Antonia Scharf, Ayse Tatli, Claus von der Burchard, Jakob Nazarenus, Johann Baptist Roider and Reinhard Koch
J. Imaging 2024, 10(5), 116; https://doi.org/10.3390/jimaging10050116 - 9 May 2024
Abstract
Hyperfluorescence (HF) and reduced autofluorescence (RA) are important biomarkers in fundus autofluorescence (FAF) images for assessing the health of the retinal pigment epithelium (RPE), an important indicator of disease progression in geographic atrophy (GA) or central serous chorioretinopathy (CSCR). FAF images have been annotated by human raters, but distinguishing biomarkers (whether signals are increased or decreased) from the normal background proves challenging, with borders being particularly open to interpretation. Consequently, significant variations emerge among different graders, and even within the same grader during repeated annotations. Tests on in-house FAF data show that even highly skilled medical experts, despite having previously discussed and settled on precise annotation guidelines, reach a pair-wise agreement (Dice score) of no more than 63–80% for HF segmentations and only 14–52% for RA. The data further show that our primary annotation expert agrees with herself at a 72% Dice score for HF and 51% for RA. Given these numbers, the task of automated HF and RA segmentation cannot be reduced to simply improving a segmentation score. Instead, we propose the use of a segmentation ensemble. Learning from images with a single annotation, the ensemble reaches expert-like performance, agreeing with all our experts at 64–81% Dice for HF and 21–41% for RA. In addition, utilizing the mean predictions of the ensemble networks and their variance, we devise ternary segmentations in which FAF image areas are labeled as confident background, confident HF, or potential HF, ensuring that predictions are reliable where they are confident (97% precision) while detecting all instances of HF (99% recall) annotated by all experts.
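Schematically, such a ternary map can be produced by thresholding the ensemble's mean prediction: pixels where (nearly) all members agree are labeled confidently, and the rest are marked as potential HF. The paper derives its rule from the ensemble mean and variance; the cut-offs below are illustrative assumptions of ours.

```python
import numpy as np

def ternary_map(probs, t_low=0.1, t_high=0.9):
    """probs: (K, H, W) sigmoid outputs of K ensemble members.

    Returns 0 = confident background, 1 = potential HF, 2 = confident HF.
    """
    mean = probs.mean(axis=0)
    out = np.ones(mean.shape, dtype=np.uint8)  # default: potential HF
    out[mean <= t_low] = 0                     # members agree on background
    out[mean >= t_high] = 2                    # members agree on HF
    return out
```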

14 pages, 8815 KiB  
Article
Evaluation of Non-Invasive Methods for (R)-[11C]PK11195 PET Image Quantification in Multiple Sclerosis
by Dimitri B. A. Mantovani, Milena S. Pitombeira, Phelipi N. Schuck, Adriel S. de Araújo, Carlos Alberto Buchpiguel, Daniele de Paula Faria and Ana Maria M. da Silva
J. Imaging 2024, 10(2), 39; https://doi.org/10.3390/jimaging10020039 - 31 Jan 2024
Abstract
This study aims to evaluate non-invasive PET quantification methods for (R)-[11C]PK11195 uptake measurement in multiple sclerosis (MS) patients and healthy controls (HC), in comparison with the arterial input function (AIF), using dynamic (R)-[11C]PK11195 PET and magnetic resonance images. The total volume of distribution (VT) and distribution volume ratio (DVR) were measured in the gray matter, white matter, caudate nucleus, putamen, pallidum, thalamus, cerebellum, and brainstem using the AIF, an image-derived input function (IDIF) from the carotid arteries, and pseudo-reference regions from supervised clustering analysis (SVCA). Uptake differences between the MS and HC groups were tested using statistical tests adjusted for age and sex, and correlations between the results of the different quantification methods were also analyzed. Significant DVR differences were observed in the gray matter, white matter, putamen, pallidum, thalamus, and brainstem of MS patients compared to the HC group. Strong correlations were also found between the DVR values of the non-invasive methods and the AIF (0.928 for IDIF and 0.975 for SVCA, p < 0.0001). On the other hand, (R)-[11C]PK11195 uptake could not be differentiated between MS patients and HC using VT values, and only a weak correlation (0.356, p < 0.0001) was found between VT estimated with the AIF and with the IDIF. Our study shows that SVCA-based reference-region modeling is the best alternative to the AIF, provided a careful and appropriate methodology is applied.
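For readers unfamiliar with the notation, the distribution volume ratio relates a target region's total volume of distribution to that of a reference region; this is the standard kinetic-modeling relation, not a definition specific to this paper:

```latex
\mathrm{DVR} = \frac{V_{\mathrm{T}}^{\text{target}}}{V_{\mathrm{T}}^{\text{reference}}}
```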
