Advances in Quantitative Imaging Analysis: From Theory to Practice

A special issue of BioMedInformatics (ISSN 2673-7426). This special issue belongs to the section "Imaging Informatics".

Deadline for manuscript submissions: 30 November 2024

Special Issue Editors


Dr. Federico Mastroleo
Guest Editor
1. Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
2. Department of Translational Medicine, University of Piemonte Orientale (UPO), Via Solaroli 17, 28100 Novara, Italy
Interests: radiotherapy; artificial intelligence; machine learning; process mining; radiomics

Dr. Angela Ammirabile
Guest Editor
1. Department of Diagnostic and Interventional Radiology, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
2. Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
Interests: radiology; abdominal radiology; thoracic radiology; radiomics; quantitative imaging analysis; machine learning

Dr. Giulia Marvaso
Guest Editor
Department of Radiotherapy, European Institute of Oncology (IEO) IRCCS, 20141 Milan, Italy
Interests: urological malignancies; radiation oncology; new fractionation protocols; treatment accuracy; patient’s quality of life; prognostic and predictive factors; SBRT hypofractionation; oligometastatic disease

Special Issue Information

Dear Colleagues,

This Special Issue on quantitative imaging analysis seeks to bring together the latest advances in the field of imaging analysis. With a focus on quantitative methods for extracting meaningful information from images, it covers a wide range of imaging modalities, including microscopy, medical imaging, and remote sensing.

The articles in this special issue will showcase the latest developments in the field of quantitative imaging analysis, covering both theoretical and practical aspects. Experts in the field will present their work on various topics, including image segmentation, registration, feature extraction, pattern recognition, and image-based modeling. The articles will highlight the challenges and opportunities in this rapidly evolving field, while promoting interdisciplinary collaboration among researchers from different fields.

This Special Issue provides a comprehensive overview of the current state of the art in quantitative imaging analysis and aims to promote further research and development in the field. By presenting the latest advancements in imaging analysis, it will serve as a valuable resource for researchers, practitioners, and students.

Dr. Federico Mastroleo
Dr. Angela Ammirabile
Dr. Giulia Marvaso
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. BioMedInformatics is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • quantitative analysis
  • imaging
  • image processing
  • computer vision
  • biomedical engineering
  • image segmentation
  • pattern recognition
  • radiomics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research


13 pages, 2994 KiB  
Article
Abdominal MRI Unconditional Synthesis with Medical Assessment
by Bernardo Gonçalves, Mariana Silva, Luísa Vieira and Pedro Vieira
BioMedInformatics 2024, 4(2), 1506-1518; https://doi.org/10.3390/biomedinformatics4020082 - 7 Jun 2024
Abstract
Current computer vision models require a significant amount of annotated data to improve their performance in a particular task. However, obtaining the required annotated data is challenging, especially in medicine. Hence, data augmentation techniques play a crucial role. In recent years, generative models have been used to create artificial medical images, which have shown promising results. This study aimed to use a state-of-the-art generative model, StyleGAN3, to generate realistic synthetic abdominal magnetic resonance images and to evaluate them using quantitative metrics and qualitative assessments by medical professionals. For this purpose, an abdominal MRI dataset acquired at Garcia da Horta Hospital in Almada, Portugal, was used. A subset containing only axial gadolinium-enhanced slices was used to train the model. The obtained Fréchet inception distance value (12.89) aligned with the state of the art, and a medical expert confirmed the significant realism and quality of the images. However, specific issues were identified in the generated images, such as texture variations, visual artefacts and anatomical inconsistencies. Despite these issues, this work demonstrated that StyleGAN3 is a viable solution for synthesising realistic medical imaging data, particularly in abdominal imaging.
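As a concrete illustration of the Fréchet inception distance reported above, here is a minimal sketch of the FID computation from pre-extracted feature vectors. This is not the authors' code; the 2048-dimensional Inception-v3 activations and the array names are assumptions used only for illustration.

```python
# Minimal sketch: Frechet Inception Distance (FID) between two feature sets.
# Assumes features were already extracted (e.g., 2048-D Inception-v3 activations)
# for real and synthetic images; array names are illustrative placeholders.
import numpy as np
from scipy import linalg

def frechet_distance(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2*sqrt(C_r C_f))."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    if np.iscomplexobj(covmean):  # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))

# Example with random placeholders standing in for extracted activations
rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(size=(500, 2048)), rng.normal(size=(500, 2048))))
```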

26 pages, 5616 KiB  
Article
Advancing Early Leukemia Diagnostics: A Comprehensive Study Incorporating Image Processing and Transfer Learning
by Rezaul Haque, Abdullah Al Sakib, Md Forhad Hossain, Fahadul Islam, Ferdaus Ibne Aziz, Md Redwan Ahmed, Somasundar Kannan, Ali Rohan and Md Junayed Hasan
BioMedInformatics 2024, 4(2), 966-991; https://doi.org/10.3390/biomedinformatics4020054 - 1 Apr 2024
Abstract
Disease recognition has been revolutionized by autonomous systems in the rapidly developing field of medical technology. A crucial aspect of diagnosis involves the visual assessment and enumeration of white blood cells in microscopic peripheral blood smears. This practice yields invaluable insights into a patient’s health, enabling the identification of blood malignancies such as leukemia. Early identification of leukemia subtypes is paramount for tailoring appropriate therapeutic interventions and enhancing patient survival rates. However, traditional diagnostic techniques, which depend on visual assessment, are subjective, laborious, and prone to errors. The advent of machine learning (ML) technologies offers a promising avenue for more accurate and efficient leukemia classification. In this study, we introduced a novel approach to leukemia classification by integrating advanced image processing, diverse dataset utilization, and sophisticated feature extraction techniques, coupled with the development of transfer learning (TL) models. Focused on improving upon the accuracy of previous studies, our approach utilized Kaggle datasets for binary and multiclass classifications. Extensive image processing involved a novel LoGMH method, complemented by diverse augmentation techniques. Feature extraction employed deep convolutional neural networks (DCNNs), with the extracted features subsequently used to train various ML and TL models. Rigorous evaluation using traditional metrics revealed Inception-ResNet’s superior performance, surpassing other models with F1 scores of 96.07% and 95.89% for binary and multiclass classification, respectively. Our results notably surpass previous research, particularly in cases involving a higher number of classes. These findings promise to influence clinical decision support systems, guide future research, and potentially revolutionize cancer diagnostics beyond leukemia, impacting broader medical imaging and oncology domains.
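The feature-extraction step described above can be sketched as a pretrained Inception-ResNet backbone pooled into a feature vector that feeds a classical classifier. This is an illustrative sketch only; the image size, the SVM classifier choice, and the dataset variables are assumptions, and the paper's LoGMH preprocessing is not reproduced.

```python
# Minimal sketch: use a pretrained Inception-ResNet as a DCNN feature extractor
# and train a classical ML classifier on the extracted features.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

extractor = tf.keras.Sequential([
    tf.keras.applications.InceptionResNetV2(include_top=False, weights="imagenet",
                                            input_shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
])

def extract_features(images: np.ndarray) -> np.ndarray:
    """images: float array of shape (N, 224, 224, 3)."""
    pre = tf.keras.applications.inception_resnet_v2.preprocess_input(images.copy())
    return extractor.predict(pre, verbose=0)

# X_train, y_train are assumed to be blood-smear images and subtype labels:
# feats = extract_features(X_train)
# clf = SVC(kernel="rbf").fit(feats, y_train)
```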

22 pages, 869 KiB  
Article
Towards the Generation of Medical Imaging Classifiers Robust to Common Perturbations
by Joshua Chuah, Pingkun Yan, Ge Wang and Juergen Hahn
BioMedInformatics 2024, 4(2), 889-910; https://doi.org/10.3390/biomedinformatics4020050 - 1 Apr 2024
Abstract
Background: Machine learning (ML) and artificial intelligence (AI)-based classifiers can be used to diagnose diseases from medical imaging data. However, few of the classifiers proposed in the literature translate to clinical use because of robustness concerns. Materials and methods: This study investigates how to improve the robustness of AI/ML imaging classifiers by simultaneously applying perturbations of common effects (Gaussian noise, contrast, blur, rotation, and tilt) to different amounts of training and test images. Furthermore, a comparison with classifiers trained with adversarial noise is also presented. This procedure is illustrated using two publicly available datasets, the PneumoniaMNIST dataset and the Breast Ultrasound Images dataset (BUSI dataset). Results: Classifiers trained with small amounts of perturbed training images showed similar performance on unperturbed test images compared to the classifier trained with no perturbations. Additionally, classifiers trained with perturbed data performed significantly better on test data both perturbed by a single perturbation (p-values: noise = 0.0186; contrast = 0.0420; rotation, tilt, and blur = 0.000977) and multiple perturbations (p-values: PneumoniaMNIST = 0.000977; BUSI = 0.00684) than the classifier trained with unperturbed data. Conclusions: Classifiers trained with perturbed data were found to be more robust to perturbed test data than the unperturbed classifier without exhibiting a performance decrease on unperturbed test images, indicating benefits to training with data that include some perturbed images and no significant downsides.
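A minimal sketch of the kind of perturbation pipeline described above (contrast change, blur, rotation, an affine "tilt", and additive Gaussian noise), applied to a fraction of training images, is shown below; the magnitudes, kernel sizes, and perturbed fraction are placeholders rather than the study's settings.

```python
# Minimal sketch: perturb a fraction of training images with the effects named
# in the abstract. Parameter values are illustrative placeholders.
import random
import torch
from torchvision import transforms

perturb = transforms.Compose([
    transforms.ColorJitter(contrast=0.4),                      # contrast change
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),  # blur
    transforms.RandomRotation(degrees=15),                     # rotation
    transforms.RandomAffine(degrees=0, shear=10),              # approximate "tilt"
])

def maybe_perturb(img: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    """Perturb a float tensor image (C, H, W) in [0, 1] with probability p."""
    if random.random() < p:
        img = perturb(img)
        img = img + 0.05 * torch.randn_like(img)  # additive Gaussian noise
        img = img.clamp(0.0, 1.0)
    return img
```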

21 pages, 6951 KiB  
Article
Enhancing Brain Tumor Classification with Transfer Learning across Multiple Classes: An In-Depth Analysis
by Syed Ahmmed, Prajoy Podder, M. Rubaiyat Hossain Mondal, S M Atikur Rahman, Somasundar Kannan, Md Junayed Hasan, Ali Rohan and Alexander E. Prosvirin
BioMedInformatics 2023, 3(4), 1124-1144; https://doi.org/10.3390/biomedinformatics3040068 - 6 Dec 2023
Cited by 14
Abstract
This study focuses on leveraging data-driven techniques to diagnose brain tumors through magnetic resonance imaging (MRI) images. Utilizing deep learning (DL), we introduce and fine-tune two robust frameworks, ResNet 50 and Inception V3, specifically designed for the classification of brain MRI images. Building upon the previous success of ResNet 50 and Inception V3 in classifying other medical imaging datasets, our investigation encompasses datasets with distinct characteristics, including one with four classes and another with two. The primary contribution of our research lies in the meticulous curation of these paired datasets. We have also integrated essential techniques, including Early Stopping and ReduceLROnPlateau, to refine the models through hyperparameter optimization. This involved adding extra layers, experimenting with various loss functions and learning rates, and incorporating dropout layers and regularization to ensure model convergence in predictions. Furthermore, strategic enhancements, such as customized pooling and regularization layers, have significantly elevated the accuracy of our models, resulting in remarkable classification accuracy. Notably, the pairing of ResNet 50 with the Nadam optimizer yields accuracy rates of 99.34% for gliomas, 93.52% for meningiomas, 98.68% for non-tumorous images, and 97.70% for pituitary tumors. These results underscore the transformative potential of our custom-made approach, achieving an aggregate testing accuracy of 97.68% for these four distinct classes. In the two-class dataset, ResNet 50 with the Adam optimizer excels, demonstrating better precision, recall, and F1 score, and an overall accuracy of 99.84%. Moreover, it attains per-class accuracies of 99.62% for ‘Tumor Positive’ and 100% for ‘Tumor Negative’, underscoring a remarkable advancement in the realm of brain tumor categorization. This research underscores the innovative possibilities of DL models and our specialized optimization methods in the domain of diagnosing brain cancer from MRI images.
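The training setup named in the abstract (a ResNet 50 backbone with the Nadam optimizer, Early Stopping, and ReduceLROnPlateau) can be sketched in Keras as follows; the input size, added layers, and hyperparameter values are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: ResNet50 backbone with the Nadam optimizer and the callbacks
# named in the abstract. Hyperparameters and layer choices are placeholders.
import tensorflow as tf

backbone = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.4),
    tf.keras.layers.Dense(4, activation="softmax"),  # glioma / meningioma / no tumor / pituitary
])
model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True),
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=2),
]
# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=callbacks)  # datasets assumed
```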

14 pages, 953 KiB  
Article
Federated Learning for Diabetic Retinopathy Detection Using Vision Transformers
by Mohamed Chetoui and Moulay A. Akhloufi
BioMedInformatics 2023, 3(4), 948-961; https://doi.org/10.3390/biomedinformatics3040058 - 1 Nov 2023
Cited by 3
Abstract
A common consequence of diabetes mellitus, diabetic retinopathy (DR) results in lesions on the retina that impair vision. It can cause blindness if not detected in time. Unfortunately, DR cannot be reversed, and treatment simply keeps eyesight intact. The risk of vision loss can be considerably decreased with early detection and treatment of DR. Ophthalmologists must manually diagnose DR from retinal fundus images, which is time-consuming, labor-intensive, and costly. It is also more prone to error than computer-aided diagnosis methods. Deep learning has recently become one of the methods used most frequently to improve performance in a variety of fields, including medical image analysis and classification. In this paper, we develop a federated learning approach to detect diabetic retinopathy using four distributed institutions in order to build a robust model. Our federated learning approach is based on the Vision Transformer architecture to classify DR and normal cases. Several performance measures were used, such as accuracy, area under the curve (AUC), sensitivity and specificity. The results show an improvement of up to 3% in terms of accuracy with the proposed federated learning technique. The technique also addresses crucial issues such as data security, data access rights, and data protection.
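A minimal sketch of the server-side aggregation step in a FedAvg-style federated setup like the one described above is given below; the local training loop, the Vision Transformer classifier, and the client data loaders are assumed and not shown.

```python
# Minimal sketch of FedAvg-style aggregation across institutions: each site trains
# locally, then the server averages parameters weighted by local sample counts.
from typing import Dict, List
import torch

def fedavg(client_states: List[Dict[str, torch.Tensor]],
           client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    """Weighted average of client state_dicts (weights proportional to data size)."""
    total = float(sum(client_sizes))
    averaged = {}
    for name in client_states[0]:
        averaged[name] = sum(state[name].float() * (n / total)
                             for state, n in zip(client_states, client_sizes))
    return averaged

# Per communication round (sketch; local_train and site loaders are assumed):
# for rnd in range(num_rounds):
#     states = [local_train(copy.deepcopy(global_model), loader) for loader in site_loaders]
#     global_model.load_state_dict(fedavg(states, [len(l.dataset) for l in site_loaders]))
```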

21 pages, 643 KiB  
Article
Multimodal Deep Learning Methods on Image and Textual Data to Predict Radiotherapy Structure Names
by Priyankar Bose, Pratip Rana, William C. Sleeman IV, Sriram Srinivasan, Rishabh Kapoor, Jatinder Palta and Preetam Ghosh
BioMedInformatics 2023, 3(3), 493-513; https://doi.org/10.3390/biomedinformatics3030034 - 25 Jun 2023
Cited by 2
Abstract
Physicians often label anatomical structure sets in Digital Imaging and Communications in Medicine (DICOM) images with nonstandard, random names. Hence, the standardization of these names for the Organs at Risk (OARs), Planning Target Volumes (PTVs), and ‘Other’ organs is a vital problem. This paper presents novel deep learning methods on structure sets by integrating multimodal data compiled from the radiotherapy centers of the US Veterans Health Administration (VHA) and Virginia Commonwealth University (VCU). These de-identified data comprise 16,290 prostate structures. Our method integrates the multimodal textual and imaging data with Convolutional Neural Network (CNN)-based deep learning approaches such as CNN, Visual Geometry Group (VGG) network, and Residual Network (ResNet) and shows improved results in prostate radiotherapy structure name standardization. Evaluation with the macro-averaged F1 score shows that our model with single-modal textual data usually performs better than previous studies. The models perform well on textual data alone, while the addition of imaging data shows that deep neural networks achieve better performance using information present in other modalities. Additionally, using masked images and masked doses along with text leads to an overall performance improvement with the CNN-based architectures compared to using all the modalities together. Undersampling the majority class leads to further performance enhancement. The VGG network on the masked image-dose data combined with CNNs on the text data performs best and represents the state of the art in this domain.
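To illustrate the general late-fusion idea of combining an imaging branch with a textual branch before classification, here is a minimal PyTorch sketch; the layer sizes, vocabulary size, and three-way OAR/PTV/Other head are placeholders and do not reproduce the authors' architectures.

```python
# Minimal sketch of multimodal late fusion: an image branch (small CNN) and a
# text branch (embedding + 1D conv) are concatenated before the classifier.
import torch
import torch.nn as nn

class MultimodalClassifier(nn.Module):
    def __init__(self, vocab_size: int = 5000, num_classes: int = 3):
        super().__init__()
        self.image_branch = nn.Sequential(      # e.g. a masked image/dose channel
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.embed = nn.Embedding(vocab_size, 64)  # tokenized structure-name text
        self.text_branch = nn.Sequential(
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32 + 64, num_classes)  # e.g. OAR / PTV / Other

    def forward(self, image: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        img_feat = self.image_branch(image)                               # (B, 32)
        txt_feat = self.text_branch(self.embed(tokens).transpose(1, 2))   # (B, 64)
        return self.head(torch.cat([img_feat, txt_feat], dim=1))

# Usage sketch:
# logits = MultimodalClassifier()(torch.randn(8, 1, 64, 64), torch.randint(0, 5000, (8, 20)))
```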

17 pages, 3456 KiB  
Article
Generation of Musculoskeletal Ultrasound Images with Diffusion Models
by Sofoklis Katakis, Nikolaos Barotsis, Alexandros Kakotaritis, Panagiotis Tsiganos, George Economou, Elias Panagiotopoulos and George Panayiotakis
BioMedInformatics 2023, 3(2), 405-421; https://doi.org/10.3390/biomedinformatics3020027 - 23 May 2023
Cited by 2
Abstract
The recent advances in deep learning have revolutionised computer-aided diagnosis in medical imaging. However, deep learning approaches require significant amounts of data to unveil their full potential, which can be a challenging requirement in some scientific fields, such as musculoskeletal ultrasound imaging, in which data privacy and security concerns can substantially limit the acquisition and distribution of patients’ data. For this reason, different generative methods have been introduced to significantly reduce the required amount of real data by generating synthetic images that are almost indistinguishable from the real ones. In this study, the power of diffusion models is harnessed for the generation of realistic data from a small set of musculoskeletal ultrasound images of four different muscles. Afterwards, the similarity of the generated and real images is assessed with different types of qualitative and quantitative metrics that correspond well with human judgement. In particular, the histograms of pixel intensities of the two sets of images demonstrate that the two distributions are statistically similar. Additionally, the well-established LPIPS, SSIM, FID, and PSNR metrics are used to quantify the similarity of these sets of images, and the two sets achieve extremely high similarity scores in all of them. Subsequently, high-level features are extracted from the two types of images and visualized in a two-dimensional space for inspection of their structure and identification of patterns; in this representation, the two sets of images are hard to distinguish. Finally, we perform a series of experiments to assess the impact of the generated data when training a highly efficient Attention-UNet for the important clinical application of muscle thickness measurement. Our results show that the synthetic data play a significant role in the model’s final performance and can lead to the improvement of deep learning systems in musculoskeletal ultrasound.
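Two of the reported similarity metrics (SSIM and PSNR) can be computed directly with scikit-image, as in the minimal sketch below; the image arrays are random placeholders, and LPIPS and FID are omitted because they require learned feature extractors.

```python
# Minimal sketch: SSIM and PSNR between a pair of grayscale ultrasound images.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def image_similarity(real: np.ndarray, synthetic: np.ndarray) -> dict:
    """real, synthetic: 2-D float arrays in [0, 1] of identical shape."""
    return {
        "ssim": structural_similarity(real, synthetic, data_range=1.0),
        "psnr": peak_signal_noise_ratio(real, synthetic, data_range=1.0),
    }

# Random placeholders standing in for a real and a synthetic image
rng = np.random.default_rng(42)
real = rng.random((256, 256))
synthetic = np.clip(real + 0.02 * rng.standard_normal((256, 256)), 0.0, 1.0)
print(image_similarity(real, synthetic))
```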

Review


24 pages, 1013 KiB  
Review
Part-Prototype Models in Medical Imaging: Applications and Current Challenges
by Lisa Anita De Santi, Franco Italo Piparo, Filippo Bargagna, Maria Filomena Santarelli, Simona Celi and Vincenzo Positano
BioMedInformatics 2024, 4(4), 2149-2172; https://doi.org/10.3390/biomedinformatics4040115 - 28 Oct 2024
Abstract
Recent developments in Artificial Intelligence have increasingly focused on explainability research. The potential of Explainable Artificial Intelligence (XAI) for producing trustworthy computer-aided diagnosis systems and its usage for knowledge discovery are gaining interest in the medical imaging (MI) community to support the diagnostic process and the discovery of image biomarkers. Most of the existing XAI applications in MI are focused on interpreting the predictions made using deep neural networks, typically including attribution techniques with saliency map approaches and other feature visualization methods. However, these are often criticized for providing incorrect and incomplete representations of the black-box models’ behaviour. This highlights the importance of proposing models intentionally designed to be self-explanatory. In particular, part-prototype (PP) models are interpretable-by-design computer vision (CV) models that base their decision process on learning and identifying representative prototypical parts from input images, and they are gaining increasing interest and showing promising results in MI applications. However, the medical field has unique characteristics that could benefit from more advanced implementations of these types of architectures. This narrative review summarizes existing PP networks, their applications in MI analysis, and current challenges.
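The core of a part-prototype layer can be sketched as follows, in the spirit of ProtoPNet: squared distances between every spatial patch of a convolutional feature map and a set of learned prototypes are turned into similarity scores and max-pooled over locations. The dimensions and the log-based similarity are illustrative of this family of models, not the exact implementation of any network reviewed in the article.

```python
# Minimal sketch of a ProtoPNet-style prototype layer: per-location squared L2
# distances to learned prototypes, converted to similarity scores and max-pooled.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeLayer(nn.Module):
    def __init__(self, num_prototypes: int = 10, channels: int = 128, eps: float = 1e-4):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, channels, 1, 1))
        self.eps = eps

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (B, C, H, W) from a CNN backbone
        # ||x - p||^2 = ||x||^2 - 2 x.p + ||p||^2, evaluated at every spatial location
        x_sq = (features ** 2).sum(dim=1, keepdim=True)                     # (B, 1, H, W)
        p_sq = (self.prototypes ** 2).sum(dim=(1, 2, 3)).view(1, -1, 1, 1)  # (1, P, 1, 1)
        xp = F.conv2d(features, self.prototypes)                            # (B, P, H, W)
        dist = torch.clamp(x_sq - 2 * xp + p_sq, min=0)
        sim = torch.log((dist + 1) / (dist + self.eps))                     # high when patch is close
        return torch.amax(sim, dim=(2, 3))                                  # (B, P) per-prototype score

# Usage sketch: scores = PrototypeLayer()(torch.randn(2, 128, 7, 7))  # followed by a linear head
```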
