Artificial Intelligence in Medical Image Processing and Segmentation

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: closed (31 May 2023) | Viewed by 62069

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editors


Dr. Paolo Zaffino
Guest Editor
Department of Experimental and Clinical Medicine, Magna Graecia University, 88100 Catanzaro, Italy
Interests: medical image processing; radiotherapy; image guided surgery; artificial intelligence

Prof. Dr. Maria Francesca Spadea
Guest Editor
Institute of Biomedical Engineering, Karlsruhe Institute of Technology (KIT), D-76131 Karlsruhe, Germany
Interests: radiation therapy; biomedical imaging; 3D image processing; biomedical engineering

Special Issue Information

Dear Colleagues,

In recent years, Artificial Intelligence (AI) has deeply revolutionized the field of medical image processing. Image segmentation, in particular, is the task that has benefited most from this innovation.

This boost led to great advancements in the translation of AI algorithms from the laboratory to real clinical practice, in particular for computer-aided diagnosis and image-guided surgery.

As a result, the first medical devices relying on AI algorithms to treat or diagnose patients were recently introduced to the market.

Giant leaps have been achieved, but more remains to be done.

We are pleased to invite you to submit your work to this Special Issue, which will focus on cutting-edge developments of AI applied to the medical imaging field.

The journal will accept contributions (both original articles and reviews) mainly centered on the following topics:

  • Medical image segmentation;
  • AI-based medical image registration;
  • Medical image recognition;
  • Patient/treatment stratification based on AI image processing;
  • Synthetic medical image generation;
  • Image-guided surgery/radiotherapy based on AI;
  • Radiomics;
  • Explainable AI in medicine.

Dr. Paolo Zaffino
Prof. Dr. Maria Francesca Spadea
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical image processing
  • image segmentation
  • computer-aided diagnosis
  • image guided surgery
  • artificial intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (20 papers)


Research

16 pages, 4032 KiB  
Article
Recognizing Pediatric Tuberous Sclerosis Complex Based on Multi-Contrast MRI and Deep Weighted Fusion Network
by Dian Jiang, Jianxiang Liao, Cailei Zhao, Xia Zhao, Rongbo Lin, Jun Yang, Zhi-Cheng Li, Yihang Zhou, Yanjie Zhu, Dong Liang, Zhanqi Hu and Haifeng Wang
Bioengineering 2023, 10(7), 870; https://doi.org/10.3390/bioengineering10070870 - 22 Jul 2023
Cited by 3 | Viewed by 2119
Abstract
Multi-contrast magnetic resonance imaging (MRI) is widely applied in the clinic to identify children with tuberous sclerosis complex (TSC). In this work, a deep convolutional neural network with multi-contrast MRI is proposed to diagnose pediatric TSC. Firstly, by combining T2W and FLAIR images, a new synthetic modality named FLAIR3 was created to enhance the contrast between TSC lesions and normal brain tissues. After that, a deep weighted fusion network (DWF-net) using a late fusion strategy is proposed to diagnose TSC in children. In experiments, a total of 680 children were enrolled, including 331 healthy children and 349 TSC children. The experimental results indicate that FLAIR3 successfully enhances the visibility of TSC lesions and improves the classification performance. Additionally, the proposed DWF-net delivers a superior classification performance compared to previous methods, achieving an AUC of 0.998 and an accuracy of 0.985. The proposed method has the potential to be a reliable computer-aided diagnostic tool for assisting radiologists in diagnosing TSC in children. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Image Processing and Segmentation)
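As an illustration of the late-fusion strategy described above, here is a minimal PyTorch sketch: one encoder per MRI contrast, with class logits blended by learnable weights. The ResNet-18 backbones and the softmax-normalized weighting are assumptions for illustration, not the authors' exact DWF-net design.

```python
# A two-branch late-fusion classifier for multi-contrast MRI (T2W + FLAIR).
# Backbones and fusion weights are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class LateFusionNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc_t2 = resnet18(num_classes=n_classes)     # T2W branch
        self.enc_flair = resnet18(num_classes=n_classes)  # FLAIR branch
        self.w = nn.Parameter(torch.tensor([0.5, 0.5]))   # learnable fusion weights

    def forward(self, t2, flair):
        w = torch.softmax(self.w, dim=0)  # keep the two weights normalized
        return w[0] * self.enc_t2(t2) + w[1] * self.enc_flair(flair)

model = LateFusionNet()
logits = model(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
```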

17 pages, 17945 KiB  
Article
Mask-Transformer-Based Networks for Teeth Segmentation in Panoramic Radiographs
by Mehreen Kanwal, Muhammad Mutti Ur Rehman, Muhammad Umar Farooq and Dong-Kyu Chae
Bioengineering 2023, 10(7), 843; https://doi.org/10.3390/bioengineering10070843 - 17 Jul 2023
Cited by 7 | Viewed by 2370
Abstract
Teeth segmentation plays a pivotal role in dentistry by facilitating accurate diagnoses and aiding the development of effective treatment plans. While traditional methods have primarily focused on teeth segmentation, they often fail to consider the broader oral tissue context. This paper proposes a panoptic-segmentation-based method that combines the results of instance segmentation with semantic segmentation of the background. Particularly, we introduce a novel architecture for instance teeth segmentation that leverages a dual-path transformer-based network, integrated with a panoptic quality (PQ) loss function. The model directly predicts masks and their corresponding classes, with the PQ loss function streamlining the training process. Our proposed architecture features a dual-path transformer block that facilitates bi-directional communication between the pixel path CNN and the memory path. It also contains a stacked decoder block that aggregates multi-scale features across different decoding resolutions. The transformer block integrates pixel-to-memory feedback attention, pixel-to-pixel self-attention, and memory-to-pixel and memory-to-memory self-attention mechanisms. The output heads process features to predict mask classes, while the final mask is obtained by multiplying memory path and pixel path features. When applied to the UFBA-UESC Dental Image dataset, our model exhibits a substantial improvement in segmentation performance, surpassing existing state-of-the-art techniques in terms of performance and robustness. Our research signifies an essential step forward in teeth segmentation and contributes to a deeper understanding of oral structures. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Image Processing and Segmentation)
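The panoptic quality (PQ) used in the loss above is defined as the sum of IoUs over true-positive matches divided by |TP| + ½|FP| + ½|FN|, with predictions matched to ground truth at IoU > 0.5. A small sketch of the metric itself (not the differentiable loss) for boolean numpy masks:

```python
# Minimal panoptic quality (PQ) for lists of boolean instance masks.
import numpy as np

def iou(a, b):
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def panoptic_quality(pred_masks, gt_masks):
    matched_iou, used = [], set()
    for p in pred_masks:
        best_j, best = -1, 0.0
        for j, g in enumerate(gt_masks):
            if j not in used and iou(p, g) > best:
                best_j, best = j, iou(p, g)
        if best > 0.5:                 # IoU > 0.5 guarantees a unique match
            used.add(best_j)
            matched_iou.append(best)
    tp = len(matched_iou)
    fp, fn = len(pred_masks) - tp, len(gt_masks) - tp
    denom = tp + 0.5 * fp + 0.5 * fn
    return sum(matched_iou) / denom if denom else 0.0
```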

16 pages, 1869 KiB  
Article
Multi-Stage Classification of Retinal OCT Using Multi-Scale Ensemble Deep Architecture
by Oluwatunmise Akinniyi, Md Mahmudur Rahman, Harpal Singh Sandhu, Ayman El-Baz and Fahmi Khalifa
Bioengineering 2023, 10(7), 823; https://doi.org/10.3390/bioengineering10070823 - 10 Jul 2023
Cited by 11 | Viewed by 2221
Abstract
Accurate noninvasive diagnosis of retinal disorders is required for appropriate treatment or precision medicine. This work proposes a multi-stage classification network built on a multi-scale (pyramidal) feature ensemble architecture for retinal image classification using optical coherence tomography (OCT) images. First, a scale-adaptive neural network is developed to produce multi-scale inputs for feature extraction and ensemble learning. The larger input sizes yield more global information, while the smaller input sizes focus on local details. Then, a feature-rich pyramidal architecture is designed to extract multi-scale features as inputs using DenseNet as the backbone. The advantage of the hierarchical structure is that it allows the system to extract multi-scale, information-rich features for the accurate classification of retinal disorders. Evaluation on two public OCT datasets containing normal and abnormal retinas (e.g., diabetic macular edema (DME), choroidal neovascularization (CNV), age-related macular degeneration (AMD), and drusen) and comparison against recent networks demonstrate the proposed architecture's ability to produce feature-rich classification, with average accuracies of 97.78%, 96.83%, and 94.26% for the first (binary) stage, second (three-class) stage, and all-at-once (four-class) classification, respectively, in cross-validation experiments on the first dataset. On the second dataset, our system showed an overall accuracy, sensitivity, and specificity of 99.69%, 99.71%, and 99.87%, respectively. Overall, the tangible advantages of the proposed network for enhanced feature learning might be exploited in various medical image classification tasks where scale-invariant features are crucial for precise diagnosis. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Image Processing and Segmentation)
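A hedged sketch of the multi-scale idea at inference time: the same DenseNet classifier is applied to several input resolutions and the softmax outputs are averaged. The scales and the plain averaging are illustrative assumptions; the paper's pyramidal feature ensemble is more elaborate.

```python
# Multi-scale ensembling at inference: one classifier, several resolutions.
import torch
import torch.nn.functional as F
from torchvision.models import densenet121

model = densenet121(num_classes=4).eval()
image = torch.randn(1, 3, 256, 256)  # stand-in OCT B-scan

with torch.no_grad():
    probs = [torch.softmax(model(F.interpolate(
                 image, size=(s, s), mode="bilinear", align_corners=False)), dim=1)
             for s in (128, 192, 256)]   # small = local detail, large = global context
    fused = torch.stack(probs).mean(dim=0)
print(fused.argmax(dim=1))
```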

18 pages, 8955 KiB  
Article
Automated Prediction of Osteoarthritis Level in Human Osteochondral Tissue Using Histopathological Images
by Ateka Khader and Hiam Alquran
Bioengineering 2023, 10(7), 764; https://doi.org/10.3390/bioengineering10070764 - 25 Jun 2023
Cited by 3 | Viewed by 1451
Abstract
Osteoarthritis (OA) is the most common arthritis and the leading cause of lower extremity disability in older adults. Understanding OA progression is important in the development of patient-specific therapeutic techniques at the early stage of OA rather than at the end stage. Histopathology scoring systems are usually used to evaluate OA progress and the mechanisms involved in the development of OA. This study aims to classify the histopathological images of cartilage specimens automatically, using artificial intelligence algorithms. Hematoxylin and eosin (HE)- and safranin O and fast green (SafO)-stained images of human cartilage specimens were divided into early, mild, moderate, and severe OA. Pre-trained convolutional networks (DarkNet-19, MobileNet, ResNet-101, and NasNet) were utilized to extract twenty features from the last fully connected layers for both the SafO and HE scenarios. Principal component analysis (PCA) and ant lion optimization (ALO) were utilized to obtain the best-weighted features. The support vector machine classifier was trained and tested based on the selected descriptors to achieve the highest accuracies of 98.04% and 97.03% in HE and SafO, respectively. Using the ALO algorithm, the F1 scores were 0.97, 0.991, 1, and 1 for the HE images and 1, 0.991, 0.97, and 1 for the SafO images for the early, mild, moderate, and severe classes, respectively. This algorithm may be a useful tool for researchers to evaluate the histopathological images of OA without the need for experts in histopathology scoring systems or the need to train new experts. Incorporating automated deep features could help to improve the characterization and understanding of OA progression and development. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Image Processing and Segmentation)
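The deep-feature-plus-classifier pipeline can be sketched as follows, with random arrays standing in for the extracted CNN features and with the ant lion optimization step omitted (it has no standard library implementation); only the PCA and SVM stages of the abstract are shown.

```python
# Deep features -> PCA -> SVM, a minimal sketch under stated assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 80))    # stand-in deep features (e.g., DarkNet-19)
y = rng.integers(0, 4, size=200)  # early / mild / moderate / severe

clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf", C=10.0))
print(cross_val_score(clf, X, y, cv=5).mean())
```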

17 pages, 7319 KiB  
Article
Comparison of Artificial Intelligence-Based Applications for Mandible Segmentation: From Established Platforms to In-House-Developed Software
by Robert R. Ileșan, Michel Beyer, Christoph Kunz and Florian M. Thieringer
Bioengineering 2023, 10(5), 604; https://doi.org/10.3390/bioengineering10050604 - 17 May 2023
Cited by 13 | Viewed by 3062
Abstract
Medical image segmentation, whether semi-automatic or manual, is labor-intensive, subjective, and requires specialized personnel. The fully automated segmentation process has recently gained importance due to the improved design and understanding of CNNs. Considering this, we decided to develop our in-house segmentation software and compare it to the systems of established companies, an inexperienced user, and an expert as ground truth. The companies included in the study have a cloud-based option that performs accurately in clinical routine (Dice similarity coefficient of 0.912 to 0.949) with an average segmentation time ranging from 3′54″ to 85′54″. Our in-house model achieved an accuracy of 94.24% compared to the best-performing software and had the shortest mean segmentation time of 2′03″. During the study, developing in-house segmentation software gave us a glimpse into the strenuous work that companies face when offering clinically relevant solutions. All the problems encountered were discussed with the companies and solved, so both parties benefited from this experience. In doing so, we demonstrated that fully automated segmentation needs further research and collaboration between academics and the private sector to achieve full acceptance in clinical routines. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Image Processing and Segmentation)
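For reference, the Dice similarity coefficient quoted above can be computed for two binary masks as follows (a minimal numpy sketch):

```python
# Dice similarity coefficient (DSC) for two binary segmentation masks.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```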

12 pages, 5918 KiB  
Article
A Lightweight Deep Learning Network on a System-on-Chip for Wearable Ultrasound Bladder Volume Measurement Systems: Preliminary Study
by Hyunwoo Cho, Ilseob Song, Jihun Jang and Yangmo Yoo
Bioengineering 2023, 10(5), 525; https://doi.org/10.3390/bioengineering10050525 - 26 Apr 2023
Cited by 8 | Viewed by 2545
Abstract
Bladder volume assessments are crucial for managing urinary disorders. Ultrasound imaging (US) is a preferred noninvasive, cost-effective imaging modality for bladder observation and volume measurements. However, the high operator dependency of US is a major challenge due to the difficulty in evaluating ultrasound images without professional expertise. To address this issue, image-based automatic bladder volume estimation methods have been introduced, but most conventional methods require high-complexity computing resources that are not available in point-of-care (POC) settings. Therefore, in this study, a deep learning-based bladder volume measurement system was developed for POC settings using a lightweight convolutional neural network (CNN)-based segmentation model, which was optimized on a low-resource system-on-chip (SoC) to detect and segment the bladder region in ultrasound images in real time. The proposed model achieved high accuracy and robustness and can be executed on the low-resource SoC at 7.93 frames per second, which is 13.44 times faster than a conventional network, with a negligible accuracy penalty (a 0.004 drop in the Dice coefficient). The feasibility of the developed lightweight deep learning network was demonstrated using tissue-mimicking phantoms. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Image Processing and Segmentation)
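A rough sketch of how such a frame rate could be measured; the toy network below is a placeholder, not the paper's SoC-optimized model.

```python
# Timing inference throughput (frames per second) of a small segmentation net.
import time
import torch
import torch.nn as nn

model = nn.Sequential(                 # toy encoder-decoder stand-in
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid(),
).eval()

x = torch.randn(1, 1, 256, 256)        # stand-in ultrasound frame
with torch.no_grad():
    t0 = time.perf_counter()
    for _ in range(100):
        model(x)
    fps = 100 / (time.perf_counter() - t0)
print(f"{fps:.1f} frames per second")
```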

19 pages, 7709 KiB  
Article
U-Net Architecture for Prostate Segmentation: The Impact of Loss Function on System Performance
by Maryam Montazerolghaem, Yu Sun, Giuseppe Sasso and Annette Haworth
Bioengineering 2023, 10(4), 412; https://doi.org/10.3390/bioengineering10040412 - 26 Mar 2023
Cited by 11 | Viewed by 3557
Abstract
Segmentation of the prostate gland from magnetic resonance images is rapidly becoming a standard of care in prostate cancer radiotherapy treatment planning. Automating this process has the potential to improve accuracy and efficiency. However, the performance and accuracy of deep learning models varies depending on the design and optimal tuning of the hyper-parameters. In this study, we examine the effect of loss functions on the performance of deep-learning-based prostate segmentation models. A U-Net model for prostate segmentation using T2-weighted images from a local dataset was trained, and performance was compared across nine different loss functions: Binary Cross-Entropy (BCE), Intersection over Union (IoU), Dice, BCE and Dice (BCE + Dice), weighted BCE and Dice (W (BCE + Dice)), Focal, Tversky, Focal Tversky, and Surface loss. Model outputs were compared using several metrics on a five-fold cross-validation set. Ranking of model performance was found to be dependent on the metric used to measure performance, but in general, W (BCE + Dice) and Focal Tversky performed well for all metrics (whole-gland Dice similarity coefficient (DSC): 0.71 and 0.74; 95HD: 6.66 and 7.42; Ravid: 0.05 and 0.18, respectively), and Surface loss generally ranked lowest (DSC: 0.40; 95HD: 13.64; Ravid: −0.09). When comparing the performance of the models for the mid-gland, apex, and base parts of the prostate gland, performance was lower for the apex and base than for the mid-gland. In conclusion, we have demonstrated that the performance of a deep learning model for prostate segmentation can be affected by the choice of loss function. For prostate segmentation, it would appear that compound loss functions generally outperform single loss functions such as Surface loss. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Image Processing and Segmentation)
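A sketch of a weighted compound BCE + Dice loss, one of the best-performing losses above; the equal 0.5/0.5 weighting is an assumption, not necessarily the authors' exact formulation.

```python
# Weighted compound BCE + Dice loss for binary segmentation (PyTorch).
import torch
import torch.nn as nn

class WeightedBCEDiceLoss(nn.Module):
    def __init__(self, w_bce=0.5, w_dice=0.5, eps=1e-6):
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss()
        self.w_bce, self.w_dice, self.eps = w_bce, w_dice, eps

    def forward(self, logits, target):
        probs = torch.sigmoid(logits)
        inter = (probs * target).sum()
        dice = (2 * inter + self.eps) / (probs.sum() + target.sum() + self.eps)
        return self.w_bce * self.bce(logits, target) + self.w_dice * (1 - dice)

loss = WeightedBCEDiceLoss()(torch.randn(2, 1, 64, 64),
                             torch.rand(2, 1, 64, 64).round())
```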

17 pages, 999 KiB  
Article
Radiomics-Based Machine Learning Model for Predicting Overall and Progression-Free Survival in Rare Cancer: A Case Study for Primary CNS Lymphoma Patients
by Michela Destito, Aldo Marzullo, Riccardo Leone, Paolo Zaffino, Sara Steffanoni, Federico Erbella, Francesco Calimeri, Nicoletta Anzalone, Elena De Momi, Andrés J. M. Ferreri, Teresa Calimeri and Maria Francesca Spadea
Bioengineering 2023, 10(3), 285; https://doi.org/10.3390/bioengineering10030285 - 22 Feb 2023
Cited by 11 | Viewed by 3097
Abstract
Primary Central Nervous System Lymphoma (PCNSL) is an aggressive neoplasm with a poor prognosis. Although therapeutic progress has significantly improved Overall Survival (OS), a number of patients do not respond to HD-MTX-based chemotherapy (15–25%) or experience relapse (25–50%) after an initial response. The reasons underlying this poor response to therapy are unknown. Thus, there is an urgent need to develop improved predictive models for PCNSL. In this study, we investigated whether radiomics features can improve outcome prediction in patients with PCNSL. A total of 80 patients diagnosed with PCNSL were enrolled. A patient sub-group with complete Magnetic Resonance Imaging (MRI) series was selected for the stratification analysis. Following radiomics feature extraction and selection, different Machine Learning (ML) models were tested for OS and Progression-Free Survival (PFS) prediction. To assess the stability of the selected features, images from 23 patients scanned at three different time points were used to compute the Intraclass Correlation Coefficient (ICC) and to evaluate the reproducibility of each feature for both original and normalized images. Features extracted from Z-score normalized images were significantly more stable than those extracted from non-normalized images, with an improvement of about 38% on average (p-value < 10⁻¹²). The area under the ROC curve (AUC) showed that radiomics-based prediction outperformed prediction based on current clinical prognostic factors, with an improvement of 23% for OS and 50% for PFS. These results indicate that radiomics features extracted from normalized MR images can improve prognosis stratification of PCNSL patients and pave the way for further study on their potential role in driving treatment choice. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Image Processing and Segmentation)
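Z-score intensity normalization, the preprocessing step reported to stabilize the radiomics features, amounts to subtracting the mean and dividing by the standard deviation of the voxel intensities; a minimal sketch, where the use of an optional mask is an assumption:

```python
# Z-score normalization of an MR volume.
import numpy as np

def zscore_normalize(volume, mask=None):
    # Statistics from masked voxels if a mask is given, else the whole volume.
    voxels = volume[mask] if mask is not None else volume
    return (volume - voxels.mean()) / (voxels.std() + 1e-8)
```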

15 pages, 2214 KiB  
Article
Synthetic CT in Carbon Ion Radiotherapy of the Abdominal Site
by Giovanni Parrella, Alessandro Vai, Anestis Nakas, Noemi Garau, Giorgia Meschini, Francesca Camagni, Silvia Molinelli, Amelia Barcellini, Andrea Pella, Mario Ciocca, Viviana Vitolo, Ester Orlandi, Chiara Paganelli and Guido Baroni
Bioengineering 2023, 10(2), 250; https://doi.org/10.3390/bioengineering10020250 - 14 Feb 2023
Cited by 12 | Viewed by 2032
Abstract
The generation of synthetic CT (sCT) for carbon ion radiotherapy (CIRT) applications is challenging, since high accuracy is required in treatment planning and delivery, especially in an anatomical site as complex as the abdomen. Thirty-nine abdominal MRI-CT volume pairs were collected and a three-channel cGAN (accounting for air, bones, soft tissues) was used to generate sCTs. The network was tested on five held-out MRI volumes for two scenarios: (i) a CT-based segmentation of the MRI channels, to assess the quality of sCTs, and (ii) an MRI manual segmentation, to simulate an MRI-only treatment scenario. The sCTs were evaluated by means of similarity metrics (e.g., mean absolute error, MAE) and geometrical criteria (e.g., Dice coefficient). Recalculated CIRT plans were evaluated through dose volume histograms, gamma analysis and range shift analysis. The CT-based test set presented optimal MAE on bones (86.03 ± 10.76 HU), soft tissues (55.39 ± 3.41 HU) and air (54.42 ± 11.48 HU). Higher values were obtained from the MRI-only test set (MAE on bone = 154.87 ± 22.90 HU). The global gamma pass rate reached 94.88 ± 4.9% with 3%/3 mm, while the range shift reached a median (IQR) of 0.98 (3.64) mm. The three-channel cGAN can generate acceptable abdominal sCTs and allow for CIRT dose recalculations comparable to the clinical plans. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Image Processing and Segmentation)
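The per-tissue MAE reported above can be computed from an sCT/CT pair and boolean tissue masks along these lines (a minimal numpy sketch):

```python
# Per-tissue mean absolute error (in HU) between synthetic and reference CT.
import numpy as np

def tissue_mae(sct, ct, masks):
    # masks: dict mapping a tissue name (e.g., "air", "bone") to a boolean array.
    return {name: float(np.abs(sct[m] - ct[m]).mean()) for name, m in masks.items()}
```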

19 pages, 3222 KiB  
Article
Improving the Segmentation Accuracy of Ovarian-Tumor Ultrasound Images Using Image Inpainting
by Lijiang Chen, Changkun Qiao, Meijing Wu, Linghan Cai, Cong Yin, Mukun Yang, Xiubo Sang and Wenpei Bai
Bioengineering 2023, 10(2), 184; https://doi.org/10.3390/bioengineering10020184 - 1 Feb 2023
Cited by 9 | Viewed by 4090
Abstract
Diagnostic results can be radically influenced by the quality of 2D ovarian-tumor ultrasound images. However, clinically processed 2D ovarian-tumor ultrasound images contain many artificially added symbols, such as fingers, crosses, dashed lines, and letters, which hinder artificial intelligence (AI) in image recognition. These symbols are widely distributed within the lesion's boundary, which can also affect the feature-extraction networks and thus decrease the accuracy of lesion classification and segmentation. Image inpainting techniques are used for noise and object elimination from images. To solve this problem, we observed the MMOTU dataset and built a 2D ovarian-tumor ultrasound image inpainting dataset by finely annotating the various symbols in the images. A novel framework called mask-guided generative adversarial network (MGGAN) is presented in this paper for 2D ovarian-tumor ultrasound images to remove various symbols from the images. The MGGAN performs to a high standard in corrupted regions by using an attention mechanism in the generator to pay more attention to valid information and ignore symbol information, making lesion boundaries more realistic. Moreover, fast Fourier convolutions (FFCs) and residual networks are used to increase the global field of perception; thus, our model can be applied to high-resolution ultrasound images. The greatest benefit of this algorithm is that it achieves pixel-level inpainting of distorted regions without clean images. Compared with other models, our model achieved better results with only one stage in terms of objective and subjective evaluations. Our model obtained the best results for 256 × 256 and 512 × 512 resolutions. At a resolution of 256 × 256, our model achieved 0.9246 for SSIM, 22.66 for FID, and 0.07806 for LPIPS. At a resolution of 512 × 512, our model achieved 0.9208 for SSIM, 25.52 for FID, and 0.08300 for LPIPS. Our method can considerably improve the accuracy of computerized ovarian tumor diagnosis. The segmentation accuracy was improved from 71.51% to 76.06% for the Unet model and from 61.13% to 66.65% for the PSPnet model on clean images. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Image Processing and Segmentation)
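One common ingredient of mask-guided inpainting GANs is a reconstruction loss weighted more heavily inside the corrupted (symbol) region; the sketch below illustrates that idea only, and its weights are assumptions rather than MGGAN's actual objective.

```python
# Masked L1 reconstruction loss: heavier penalty inside the symbol mask.
import torch

def masked_l1(pred, target, mask, w_hole=6.0, w_valid=1.0):
    # mask == 1 inside the corrupted (symbol) region, 0 elsewhere.
    hole = (torch.abs(pred - target) * mask).mean()
    valid = (torch.abs(pred - target) * (1 - mask)).mean()
    return w_hole * hole + w_valid * valid
```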

12 pages, 2135 KiB  
Article
Comparing 3D, 2.5D, and 2D Approaches to Brain Image Auto-Segmentation
by Arman Avesta, Sajid Hossain, MingDe Lin, Mariam Aboian, Harlan M. Krumholz and Sanjay Aneja
Bioengineering 2023, 10(2), 181; https://doi.org/10.3390/bioengineering10020181 - 1 Feb 2023
Cited by 27 | Viewed by 4999
Abstract
Deep-learning methods for auto-segmenting brain images either segment one slice of the image (2D), five consecutive slices of the image (2.5D), or an entire volume of the image (3D). Whether one approach is superior for auto-segmenting brain images is not known. We compared these three approaches (3D, 2.5D, and 2D) across three auto-segmentation models (capsule networks, UNets, and nnUNets) to segment brain structures. We used 3430 brain MRIs, acquired in a multi-institutional study, to train and test our models. We used the following performance metrics: segmentation accuracy, performance with limited training data, required computational memory, and computational speed during training and deployment. The 3D, 2.5D, and 2D approaches respectively gave the highest to lowest Dice scores across all models. 3D models maintained higher Dice scores when the training set size was decreased from 3199 MRIs down to 60 MRIs. 3D models converged 20% to 40% faster during training and were 30% to 50% faster during deployment. However, 3D models require 20 times more computational memory compared to 2.5D or 2D models. This study showed that 3D models are more accurate, maintain better performance with limited training data, and are faster to train and deploy. However, 3D models require more computational memory compared to 2.5D or 2D models. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Image Processing and Segmentation)
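The three input styles compared above can be cut from a single volume as follows; the slice counts and patch size are illustrative.

```python
# 2D, 2.5D, and 3D inputs from one MRI volume of shape [D, H, W].
import numpy as np

vol = np.random.rand(64, 128, 128)   # stand-in brain MRI volume

sl = 32
x_2d  = vol[sl]                      # (128, 128): one slice
x_25d = vol[sl - 2: sl + 3]          # (5, 128, 128): five slices as channels
x_3d  = vol[sl - 16: sl + 16]        # (32, 128, 128): a volumetric block
```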

24 pages, 5292 KiB  
Article
Enhancement Technique Based on the Breast Density Level for Mammogram for Computer-Aided Diagnosis
by Noor Fadzilah Razali, Iza Sazanita Isa, Siti Noraini Sulaiman, Noor Khairiah Abdul Karim, Muhammad Khusairi Osman and Zainal Hisham Che Soh
Bioengineering 2023, 10(2), 153; https://doi.org/10.3390/bioengineering10020153 - 23 Jan 2023
Cited by 8 | Viewed by 2493
Abstract
Mass detection in mammograms has a limited approach to the presence of a mass in overlapping denser fibroglandular breast regions. In addition, various breast density levels could decrease the learning system's ability to extract sufficient feature descriptors and may result in lower accuracy performance. Therefore, this study proposes a textural-based image enhancement technique named Spatial-based Breast Density Enhancement for Mass Detection (SbBDEM) to boost the textural features of the overlapped mass region based on the breast density level. This approach determines the optimal exposure threshold of the images' lower contrast limit and optimizes the parameters by selecting the best intensity factor guided by the best Blind/Reference-less Image Spatial Quality Evaluator (BRISQUE) scores, separately for both dense and non-dense breast classes, prior to training. Meanwhile, a modified You Only Look Once v3 (YOLOv3) architecture is employed for mass detection by specifically assigning an extra number of higher-valued anchor boxes to the shallower detection head using the enhanced image. The experimental results show that using SbBDEM prior to training promotes superior performance, with a 17.24% increase in mean Average Precision (mAP) over training on non-enhanced images for mass detection, 94.41% accuracy for mass segmentation, and 96% accuracy for benign and malignant mass classification. Enhancing the mammogram images based on breast density is proven to increase the overall system's performance and can aid in an improved clinical diagnosis process. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Image Processing and Segmentation)
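The enhancement-selection loop can be sketched as a search over candidate lower contrast limits scored by a no-reference quality metric; here `score_fn` stands in for a BRISQUE scorer (lower is better), and the percentile grid is an assumption.

```python
# Pick the contrast enhancement with the best no-reference quality score.
import numpy as np
from skimage import exposure

def enhance_best(img, score_fn, low_percentiles=(0.5, 1.0, 2.0, 5.0)):
    candidates = [exposure.rescale_intensity(
                      img, in_range=(np.percentile(img, p), float(img.max())))
                  for p in low_percentiles]
    return min(candidates, key=score_fn)  # smallest score = best quality
```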

13 pages, 16452 KiB  
Article
2D/3D Non-Rigid Image Registration via Two Orthogonal X-ray Projection Images for Lung Tumor Tracking
by Guoya Dong, Jingjing Dai, Na Li, Chulong Zhang, Wenfeng He, Lin Liu, Yinping Chan, Yunhui Li, Yaoqin Xie and Xiaokun Liang
Bioengineering 2023, 10(2), 144; https://doi.org/10.3390/bioengineering10020144 - 21 Jan 2023
Cited by 14 | Viewed by 4799
Abstract
Two-dimensional (2D)/three-dimensional (3D) registration is critical in clinical applications. However, existing methods suffer from long alignment times and high doses. In this paper, a non-rigid 2D/3D registration method based on deep learning with orthogonal angle projections is proposed. The application can quickly achieve alignment using only two orthogonal angle projections. We tested the method with lungs (with and without tumors) and phantom data. The results show that the Dice and normalized cross-correlations are greater than 0.97 and 0.92, respectively, and the registration time is less than 1.2 seconds. In addition, the proposed model showed the ability to track lung tumors, highlighting the clinical potential of the proposed method. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Image Processing and Segmentation)
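For reference, the normalized cross-correlation quoted in the results can be computed for two images as follows (a minimal numpy sketch):

```python
# Normalized cross-correlation (NCC) between two images.
import numpy as np

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())
```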

19 pages, 3359 KiB  
Article
Deep Feature Engineering in Colposcopy Image Recognition: A Comparative Study
by Shefa Tawalbeh, Hiam Alquran and Mohammed Alsalatie
Bioengineering 2023, 10(1), 105; https://doi.org/10.3390/bioengineering10010105 - 12 Jan 2023
Cited by 7 | Viewed by 2404
Abstract
Feature fusion techniques have been proposed and tested for many medical applications to improve diagnostic and classification problems. Specifically, cervical cancer classification can be improved by using such techniques. Feature fusion combines information from different datasets into a single dataset. This dataset contains superior discriminant power that can improve classification accuracy. In this paper, we conduct comparisons among six selected feature fusion techniques to provide the best possible classification accuracy of cervical cancer. The considered techniques are canonical correlation analysis, discriminant correlation analysis, least absolute shrinkage and selection operator, independent component analysis, principal component analysis, and concatenation. We generate ten feature datasets that come from the transfer learning of the most popular pre-trained deep learning models: Alex net, Resnet 18, Resnet 50, Resnet 10, Mobilenet, Shufflenet, Xception, Nasnet, Darknet 19, and VGG Net 16. The main contribution of this paper is to combine these models and then apply them to the six feature fusion techniques to discriminate various classes of cervical cancer. The obtained results are then fed into a support vector machine model to classify four cervical cancer classes (i.e., Negative, HSIL, LSIL, and SCC). It has been found that the six considered techniques demand relatively comparable computational complexity when run on the same machine. However, canonical correlation analysis provided the best classification accuracy among the six considered techniques, at 99.7%. The second-best methods were independent component analysis and the least absolute shrinkage and selection operator, both of which achieved 98.3% accuracy. On the other hand, the worst-performing technique was principal component analysis, which offered 90% accuracy. Our analysis approach can be applied to other medical diagnosis classification problems that may demand the reduction of feature dimensions as well as further enhancement of classification performance. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Image Processing and Segmentation)
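A sketch of CCA-based feature fusion, the best-performing technique above: two deep-feature sets (random stand-ins here) are projected into a shared canonical space and the projections concatenated before classification.

```python
# Feature fusion via canonical correlation analysis (CCA) with scikit-learn.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
feats_a = rng.normal(size=(300, 64))   # e.g., one pre-trained model's features
feats_b = rng.normal(size=(300, 64))   # e.g., another model's features

cca = CCA(n_components=32)
za, zb = cca.fit_transform(feats_a, feats_b)
fused = np.hstack([za, zb])            # fused descriptor for the SVM stage
```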

16 pages, 7091 KiB  
Article
Artificial Hummingbird Algorithm with Transfer-Learning-Based Mitotic Nuclei Classification on Histopathologic Breast Cancer Images
by Areej A. Malibari, Marwa Obayya, Abdulbaset Gaddah, Amal S. Mehanna, Manar Ahmed Hamza, Mohamed Ibrahim Alsaid, Ishfaq Yaseen and Amgad Atta Abdelmageed
Bioengineering 2023, 10(1), 87; https://doi.org/10.3390/bioengineering10010087 - 9 Jan 2023
Cited by 7 | Viewed by 2170
Abstract
Recently, artificial intelligence (AI) has revolutionized the domain of medical image processing, and image segmentation is a task that has contributed substantially to this improvement. This boost has driven the translation of AI approaches from the research lab to real medical applications, particularly for computer-aided diagnosis (CAD) and image-guided operations. Mitotic nuclei estimates in breast cancer instances have a prognostic impact on the diagnosis of cancer aggressiveness and on grading methods. The automated analysis of mitotic nuclei is difficult due to their high similarity to nonmitotic nuclei and their heteromorphic form. This study designs an artificial hummingbird algorithm with transfer-learning-based mitotic nuclei classification (AHBATL-MNC) on histopathologic breast cancer images. The goal of the AHBATL-MNC technique lies in the identification of mitotic and nonmitotic nuclei on histopathology images (HIs). For the HI segmentation process, the PSPNet model is utilized to identify the candidate mitotic patches. Next, the residual network (ResNet) model is employed as feature extractor, and the extreme gradient boosting (XGBoost) model is applied as classifier. To enhance the classification performance, the parameters of the XGBoost model are tuned using the AHBA approach. The simulation values of the AHBATL-MNC system are tested on medical imaging datasets and the outcomes are investigated by distinct measures. The simulation values demonstrate the enhanced outcomes of the AHBATL-MNC method compared to other current approaches. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Image Processing and Segmentation)
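The classification stage might be sketched as follows, with random arrays standing in for ResNet features and with scikit-learn's random search substituting for the artificial hummingbird algorithm, which has no standard library implementation.

```python
# XGBoost on deep features, with random search standing in for AHBA tuning.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 512))   # stand-in ResNet patch features
y = rng.integers(0, 2, size=400)  # mitotic vs. nonmitotic labels

search = RandomizedSearchCV(
    XGBClassifier(eval_metric="logloss"),
    {"max_depth": [3, 5, 7], "learning_rate": [0.03, 0.1, 0.3],
     "n_estimators": [100, 300]},
    n_iter=6, cv=3, random_state=0)
search.fit(X, y)
print(search.best_params_)
```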

16 pages, 3275 KiB  
Article
A High-Accuracy Detection System: Based on Transfer Learning for Apical Lesions on Periapical Radiograph
by Yueh Chuo, Wen-Ming Lin, Tsung-Yi Chen, Mei-Ling Chan, Yu-Sung Chang, Yan-Ru Lin, Yuan-Jin Lin, Yu-Han Shao, Chiung-An Chen, Shih-Lun Chen and Patricia Angela R. Abu
Bioengineering 2022, 9(12), 777; https://doi.org/10.3390/bioengineering9120777 - 6 Dec 2022
Cited by 10 | Viewed by 2458
Abstract
Apical lesions, one of the most common oral diseases, can be effectively detected in daily dental examinations using a periapical radiograph (PA). In current popular endodontic treatment, most dentists spend a lot of time manually marking the lesion area. In order to reduce the burden on dentists, this paper proposes a convolutional neural network (CNN)-based regional analysis model for apical lesions in periapical radiographs. In this study, the database was provided by dentists with more than three years of practical experience, meeting the criteria for clinical practical application. The contributions of this work are (1) an advanced adaptive threshold preprocessing technique for image segmentation, which can achieve an accuracy rate of more than 96%; (2) a better and more intuitive apical lesion symptom enhancement technique; and (3) a model for apical lesion detection with an accuracy as high as 96.21%. Compared with existing state-of-the-art technology, the proposed model improves the accuracy by more than 5%. The proposed model has successfully improved the automatic diagnosis of apical lesions. With the help of automation, dentists can focus more on technical and medical diagnoses, such as treatment, tooth cleaning, or medical communication. This proposal has been certified by the Institutional Review Board (IRB) with the certification number 202002030B0. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Image Processing and Segmentation)
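Adaptive threshold preprocessing of a radiograph is available directly in OpenCV; in this sketch the file name, block size, and constant C are illustrative placeholders, not the paper's tuned values.

```python
# Adaptive (locally computed) thresholding of a grayscale radiograph.
import cv2

img = cv2.imread("periapical.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 31, 5)   # blockSize=31, C=5
```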

18 pages, 3739 KiB  
Article
Accurate Image Reconstruction in Dual-Energy CT with Limited-Angular-Range Data Using a Two-Step Method
by Buxin Chen, Zheng Zhang, Dan Xia, Emil Y. Sidky, Taly Gilat-Schmidt and Xiaochuan Pan
Bioengineering 2022, 9(12), 775; https://doi.org/10.3390/bioengineering9120775 - 6 Dec 2022
Cited by 5 | Viewed by 2137
Abstract
Dual-energy CT (DECT) with scans over limited-angular ranges (LARs) may allow reductions in scan time and radiation dose and avoidance of possible collision between the moving parts of a scanner and the imaged object. The beam-hardening (BH) and LAR effects are two sources of image artifacts in DECT with LAR data. In this work, we investigate a two-step method to correct for both BH and LAR artifacts in order to yield accurate image reconstruction in DECT with LAR data. From low- and high-kVp LAR data in DECT, we first use a data-domain decomposition (DDD) algorithm to obtain LAR basis data with the non-linear BH effect corrected for. We then develop and tailor a directional-total-variation (DTV) algorithm to reconstruct from the LAR basis data obtained basis images with the LAR effect compensated for. Finally, using the basis images reconstructed, we create virtual monochromatic images (VMIs), and estimate physical quantities such as iodine concentrations and effective atomic numbers within the object imaged. We conduct numerical studies using two digital phantoms of different complexity levels and types of structures. LAR data of low- and high-kVp are generated from the phantoms over both single-arc (SA) and two-orthogonal-arc (TOA) LARs ranging from 14° to 180°. Visual inspection and quantitative assessment of the VMIs obtained reveal that the proposed two-step method can yield VMIs in which both BH and LAR artifacts are reduced, and the estimation accuracy of physical quantities is improved. In addition, concerning SA and TOA scans with the same total LAR, the latter is shown to yield more accurate images and physical quantity estimations than the former. We investigate a two-step method that combines the DDD and DTV algorithms to correct for both BH and LAR artifacts in image reconstruction, yielding accurate VMIs and estimations of physical quantities from low- and high-kVp LAR data in DECT. The results and knowledge acquired in this work on accurate image reconstruction in LAR DECT may give rise to further understanding and insights into the practical design of LAR scan configurations and reconstruction procedures for DECT applications. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Image Processing and Segmentation)
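A directional total variation penalty, the kind of image prior a DTV algorithm minimizes, weights the image's variation differently along each axis so that the missing angular range can be treated anisotropically; the sketch and weights below are illustrative assumptions, not the authors' exact formulation.

```python
# Directionally weighted total variation of a 2D image.
import numpy as np

def dtv(img, wx=1.0, wy=0.2):
    gx = np.abs(np.diff(img, axis=1)).sum()  # variation along x
    gy = np.abs(np.diff(img, axis=0)).sum()  # variation along y
    return wx * gx + wy * gy                 # axis weights set anisotropy
```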

20 pages, 3762 KiB  
Article
Cervical Net: A Novel Cervical Cancer Classification Using Feature Fusion
by Hiam Alquran, Mohammed Alsalatie, Wan Azani Mustafa, Rabah Al Abdi and Ahmad Rasdan Ismail
Bioengineering 2022, 9(10), 578; https://doi.org/10.3390/bioengineering9100578 - 19 Oct 2022
Cited by 38 | Viewed by 4098
Abstract
Cervical cancer, a common chronic disease, is one of the most prevalent and curable cancers among women. Pap smear images are a popular technique for screening cervical cancer. This study proposes a computer-aided diagnosis system for cervical cancer utilizing the novel Cervical Net deep learning (DL) structure and feature fusion with Shuffle Net structural features. Image acquisition and enhancement, feature extraction and selection, as well as classification are the main steps in our cervical cancer screening system. Automated features are extracted using pre-trained convolutional neural networks (CNNs) fused with the novel Cervical Net structure, from which 544 resultant features are obtained. To minimize dimensionality and select the most important features, principal component analysis (PCA) is used, as well as canonical correlation analysis (CCA), to obtain the best discriminant features for five classes of Pap smear images. These features are then fed into five different machine learning (ML) algorithms. The proposed strategy achieved the best accuracy ever obtained using a support vector machine (SVM), in which the features fused from Cervical Net and Shuffle Net reached 99.1% for all classes. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Image Processing and Segmentation)

19 pages, 12786 KiB  
Article
NDG-CAM: Nuclei Detection in Histopathology Images with Semantic Segmentation Networks and Grad-CAM
by Nicola Altini, Antonio Brunetti, Emilia Puro, Maria Giovanna Taccogna, Concetta Saponaro, Francesco Alfredo Zito, Simona De Summa and Vitoantonio Bevilacqua
Bioengineering 2022, 9(9), 475; https://doi.org/10.3390/bioengineering9090475 - 15 Sep 2022
Cited by 18 | Viewed by 3808
Abstract
Nuclei identification is a fundamental task in many areas of biomedical image analysis related to computational pathology applications. Nowadays, deep learning is the primary approach by which to segment the nuclei, but accuracy is closely linked to the amount of histological ground truth data for training. In addition, it is known that most of the hematoxylin and eosin (H&E)-stained microscopy nuclei images contain complex and irregular visual characteristics. Moreover, conventional semantic segmentation architectures grounded on convolutional neural networks (CNNs) are unable to recognize distinct overlapping and clustered nuclei. To overcome these problems, we present an innovative method based on gradient-weighted class activation mapping (Grad-CAM) saliency maps for image segmentation. The proposed solution is comprised of two steps. The first is the semantic segmentation obtained by the use of a CNN; then, the detection step is based on the calculation of local maxima of the Grad-CAM analysis evaluated on the nucleus class, allowing us to determine the positions of the nuclei centroids. This approach, which we denote as NDG-CAM, has performance in line with state-of-the-art methods, especially in isolating the different nuclei instances, and can be generalized for different organs and tissues. Experimental results demonstrated a precision of 0.833, recall of 0.815 and a Dice coefficient of 0.824 on the publicly available validation set. When used in combined mode with instance segmentation architectures such as Mask R-CNN, the method manages to surpass state-of-the-art approaches, with precision of 0.838, recall of 0.934 and a Dice coefficient of 0.884. Furthermore, performance on the external, locally collected validation set, with a Dice coefficient of 0.914 for the combined model, shows the generalization capability of the implemented pipeline, which has the ability to detect nuclei not only related to tumor or normal epithelium but also to other cytotypes. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Image Processing and Segmentation)
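The detection step described above, local maxima of the Grad-CAM saliency map restricted to the segmented nucleus class, can be sketched with scipy; the window size and threshold are illustrative.

```python
# Nuclei centroids as local maxima of a Grad-CAM map within the nucleus mask.
import numpy as np
from scipy.ndimage import maximum_filter

def detect_centroids(saliency, seg_mask, win=11, thr=0.3):
    peaks = (saliency == maximum_filter(saliency, size=win))  # local maxima
    peaks &= seg_mask.astype(bool) & (saliency > thr)         # keep nucleus class
    return np.argwhere(peaks)                                 # (row, col) centroids
```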

17 pages, 12094 KiB  
Article
A Fully Unsupervised Deep Learning Framework for Non-Rigid Fundus Image Registration
by Giovana A. Benvenuto, Marilaine Colnago, Maurício A. Dias, Rogério G. Negri, Erivaldo A. Silva and Wallace Casaca
Bioengineering 2022, 9(8), 369; https://doi.org/10.3390/bioengineering9080369 - 5 Aug 2022
Cited by 5 | Viewed by 2696
Abstract
In ophthalmology, the registration problem consists of finding a geometric transformation that aligns a pair of images, supporting eye-care specialists who need to record and compare images of the same patient. Considering the registration methods for handling eye fundus images, the literature offers only a limited number of proposals based on deep learning (DL), whose implementations use the supervised learning paradigm to train a model. Additionally, ensuring high-quality registrations while still being flexible enough to tackle a broad range of fundus images is another drawback faced by most existing methods in the literature. Therefore, in this paper, we address the above-mentioned issues by introducing a new DL-based framework for eye fundus registration. Our methodology combines a U-shaped fully convolutional neural network with a spatial transformation learning scheme, where a reference-free similarity metric allows the registration without assuming any pre-annotated or artificially created data. Once trained, the model is able to accurately align pairs of images captured under several conditions, which include the presence of anatomical differences and low-quality photographs. Compared to other registration methods, our approach achieves better registration outcomes by just passing as input the desired pair of fundus images. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Image Processing and Segmentation)
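The core of such an unsupervised registration loss can be sketched in PyTorch: warp the moving image with a predicted displacement via grid_sample and score the result against the fixed image with a reference-free similarity (normalized cross-correlation here). The network that would predict `flow` is omitted; the zero-initialized tensor stands in for its output.

```python
# Reference-free registration loss: warp, then score with global NCC.
import torch
import torch.nn.functional as F

def ncc_loss(a, b, eps=1e-8):
    a = a - a.mean(); b = b - b.mean()
    return -(a * b).sum() / (a.norm() * b.norm() + eps)  # minimize = maximize NCC

def warp(moving, grid, flow):
    # grid: identity sampling grid in [-1, 1]; flow: predicted offsets
    return F.grid_sample(moving, grid + flow, align_corners=False)

fixed = torch.rand(1, 1, 128, 128)
moving = torch.rand(1, 1, 128, 128)
theta = torch.eye(2, 3).unsqueeze(0)                       # identity affine
grid = F.affine_grid(theta, fixed.shape, align_corners=False)
flow = torch.zeros_like(grid, requires_grad=True)          # stands in for a CNN output
loss = ncc_loss(warp(moving, grid, flow), fixed)
loss.backward()                                            # gradients flow to `flow`
```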
