Deep Learning for Medical Images: Challenges and Solutions

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (1 June 2022) | Viewed by 42000

Special Issue Editor


Prof. Dr. Jitae Shin
Guest Editor
School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon 440-746, Republic of Korea
Interests: deep learning; image/video signal processing

Special Issue Information

Dear Colleagues,

Given the recent success of artificial intelligence (AI) and deep learning (DL) on natural images, adapting and further developing DL techniques for medical images is an important and timely research challenge. These developments have huge potential for medical imaging technology, medical data analysis, medical diagnostics, and healthcare in general. Medical images acquired with modalities including X-rays, magnetic resonance, microwaves, ultrasound, and optical methods are used widely for clinical purposes, and it is almost impossible for clinicians to diagnose diseases without their aid. However, interpreting these images manually is time-consuming and expensive, and the results vary with the clinician's expertise. Advances in DL enable us to extract more information from images automatically and more reliably than ever before. AI and DL techniques have come to play an important role in medical fields such as medical image processing, computer-aided diagnosis, image interpretation, image fusion, image registration, and image segmentation.

This Special Issue calls for papers presenting novel work on medical image/video processing using DL and AI. High-quality review and survey papers are also welcome. Papers considered for publication may focus on, but are not limited to, the following areas:

  • Deep learning and artificial intelligence for medical image/video;
  • Image segmentation, registration, and fusion;
  • Image processing and analysis;
  • Image formation/reconstruction and image quality assessment;
  • Medical image analysis;
  • Computer-aided diagnosis;
  • Machine learning of big data in imaging;
  • Integration of imaging with non-imaging biomarkers;
  • Visualization in medical imaging.

Prof. Dr. Jitae Shin
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)


Research

9 pages, 512 KiB  
Article
Classification of Left and Right Coronary Arteries in Coronary Angiographies Using Deep Learning
by Christian Kim Eschen, Karina Banasik, Alex Hørby Christensen, Piotr Jaroslaw Chmura, Frants Pedersen, Lars Køber, Thomas Engstrøm, Anders Bjorholm Dahl, Søren Brunak and Henning Bundgaard
Electronics 2022, 11(13), 2087; https://doi.org/10.3390/electronics11132087 - 3 Jul 2022
Cited by 4 | Viewed by 3980
Abstract
Multi-frame X-ray images (videos) of the coronary arteries obtained using coronary angiography (CAG) provide detailed information about the anatomy and blood flow in the coronary arteries and play a pivotal role in diagnosing and treating ischemic heart disease. Deep learning has the potential to quickly and accurately quantify narrowings and blockages of the arteries from CAG videos. A CAG consists of videos acquired separately for the left coronary artery (LCA) and the right coronary artery (RCA). The pathology for the LCA and RCA is typically reported only for the entire CAG, not for the individual videos; however, training stenosis quantification models is difficult when the LCA/RCA identity of each video is unknown. Here, we present a deep learning-based approach for classifying LCA and RCA in CAG videos, which enables linking videos with the reported pathological findings. We manually labeled 3545 and 520 videos (approximately seven videos per CAG) to enable training and testing of the models, respectively. We obtained F1 scores of 0.99 for both LCA and RCA classification on the test set. The classification performance was further investigated with extensive experiments across different model architectures (R(2+1)D, X3D, and MViT), model input sizes, data augmentations, and numbers of training videos. Our results show that CAG videos can be accurately curated using deep learning, an essential preprocessing step for downstream diagnostics of coronary artery disease.
(This article belongs to the Special Issue Deep Learning for Medical Images: Challenges and Solutions)
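The curation pipeline lends itself to a compact illustration. Below is a minimal sketch, not the authors' code, of fine-tuning a Kinetics-pretrained R(2+1)D backbone (one of the three architectures compared in the paper) for binary LCA/RCA classification; the clip dimensions, batch size, and learning rate are placeholder assumptions.

```python
# Sketch: fine-tune a pretrained R(2+1)D video backbone for LCA-vs-RCA
# classification of CAG clips (hyperparameters are illustrative only).
import torch
import torch.nn as nn
from torchvision.models.video import r2plus1d_18, R2Plus1D_18_Weights

model = r2plus1d_18(weights=R2Plus1D_18_Weights.KINETICS400_V1)
model.fc = nn.Linear(model.fc.in_features, 2)   # 2 classes: LCA, RCA

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# clips: (batch, channels, frames, height, width); labels: 0 = LCA, 1 = RCA
clips = torch.randn(4, 3, 16, 112, 112)
labels = torch.tensor([0, 1, 0, 1])

logits = model(clips)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```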

16 pages, 7697 KiB  
Article
Deep Feature Vectors Concatenation for Eye Disease Detection Using Fundus Image
by Radifa Hilya Paradisa, Alhadi Bustamam, Wibowo Mangunwardoyo, Andi Arus Victor, Anggun Rama Yudantha and Prasnurzaki Anki
Electronics 2022, 11(1), 23; https://doi.org/10.3390/electronics11010023 - 22 Dec 2021
Cited by 15 | Viewed by 5829
Abstract
A fundus image captures the back of the eye (retina) and plays an important role in the detection of diseases, including diabetic retinopathy (DR). DR is the most common complication of diabetes and remains an important cause of visual impairment, especially in the young and economically active age group. In patients with DR, early diagnosis can effectively help prevent the risk of vision loss. DR screening is performed by an ophthalmologist, who analyses the lesions in the fundus image. However, the increasing prevalence of DR is not matched by the availability of ophthalmologists who can read fundus images, which can delay the prevention and management of DR. There is therefore a need for an automated diagnostic system to help ophthalmologists increase the efficiency of the diagnostic process. This paper provides a deep learning approach with a concatenated model for fundus image classification into three classes: no DR, non-proliferative diabetic retinopathy (NPDR), and proliferative diabetic retinopathy (PDR). The architectures used are DenseNet121 and Inception-ResNetV2; the feature extraction results from the two models are concatenated and classified using a multilayer perceptron (MLP). The proposed method improves on a single model, achieving 91% accuracy, average precision, and recall, and an F1-score of 90%. This experiment demonstrates that our proposed deep learning approach is effective for automatic DR classification using fundus photograph data.
(This article belongs to the Special Issue Deep Learning for Medical Images: Challenges and Solutions)
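As a rough illustration of the concatenation idea, the following sketch (not the authors' implementation) pools features from DenseNet121 and Inception-ResNetV2 and feeds the concatenated vector to an MLP head. It assumes the timm library provides both backbones; the hidden size of 512 and dropout rate are arbitrary choices.

```python
# Sketch: two-backbone feature concatenation + MLP classifier for
# no-DR / NPDR / PDR grading (layer sizes are assumptions).
import torch
import torch.nn as nn
import timm

class ConcatClassifier(nn.Module):
    def __init__(self, num_classes=3):  # no DR, NPDR, PDR
        super().__init__()
        # num_classes=0 makes timm return pooled feature vectors
        self.densenet = timm.create_model("densenet121", pretrained=True, num_classes=0)
        self.incresnet = timm.create_model("inception_resnet_v2", pretrained=True, num_classes=0)
        feat_dim = self.densenet.num_features + self.incresnet.num_features
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(512, num_classes),
        )

    def forward(self, x):
        f = torch.cat([self.densenet(x), self.incresnet(x)], dim=1)
        return self.mlp(f)

model = ConcatClassifier()
logits = model(torch.randn(2, 3, 299, 299))  # (batch, 3 class logits)
```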

12 pages, 6077 KiB  
Article
Future Image Synthesis for Diabetic Retinopathy Based on the Lesion Occurrence Probability
by Sangil Ahn, Quang T.M. Pham, Jitae Shin and Su Jeong Song
Electronics 2021, 10(6), 726; https://doi.org/10.3390/electronics10060726 - 19 Mar 2021
Cited by 13 | Viewed by 2791
Abstract
Diabetic retinopathy (DR) is one of the major causes of blindness. If the lesions observed in DR occur in the central part of the fundus, they can cause severe vision loss, a condition called diabetic macular edema (DME). All patients with DR potentially have DME, since DME can occur at every stage of DR. Synthesizing future fundus images that predict the progression of the disease state is very challenging because it requires a large amount of longitudinal data collected over a long period, and even when such data are available, there are pixel-level differences between the current fundus image and the target future image. This makes it difficult to train a deep learning model to synthesize future fundus images that account for lesion change. In this paper, we synthesize future fundus images that reflect disease progression using a two-step training approach to overcome these problems. In the first step, we concentrate on synthesizing a realistic fundus image from only a lesion segmentation mask and a vessel segmentation mask, using a large dataset to train the fundus generator. In the second step, we train a lesion probability predictor to create a probability map containing the occurrence probability of each lesion. Finally, based on the probability map and the current vessels, the pre-trained fundus generator synthesizes a predicted future fundus image. We visually demonstrate both the fundus generator's ability to control pathological information and the prediction of disease progression in images generated by our framework. Our framework achieves an F1-score of 0.74 for predicting DR severity and 0.91 for predicting DME occurrence. We further demonstrate its capability by comparing per-class DR severity scores obtained by passing the predicted future image and the real future image through an evaluation model.
(This article belongs to the Special Issue Deep Learning for Medical Images: Challenges and Solutions)
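A heavily simplified sketch of the two-step design follows. The layer sizes and modules are hypothetical stand-ins, not the paper's architecture; the point is only the data flow: a lesion probability predictor produces a per-pixel occurrence map, which, together with the vessel mask, conditions the fundus generator.

```python
# Sketch: lesion-probability-conditioned future fundus synthesis
# (toy modules; the real networks are far deeper).
import torch
import torch.nn as nn

class FundusGenerator(nn.Module):
    """Maps (vessel mask, lesion probability map) -> RGB fundus image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),  # RGB in [-1, 1]
        )

    def forward(self, vessel, lesion_prob):
        return self.net(torch.cat([vessel, lesion_prob], dim=1))

class LesionProbPredictor(nn.Module):
    """Step 2: predicts a lesion-occurrence probability map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # per-pixel probability
        )

    def forward(self, current_image):
        return self.net(current_image)

gen, pred = FundusGenerator(), LesionProbPredictor()
current = torch.randn(1, 3, 256, 256)    # current fundus image
vessel = torch.rand(1, 1, 256, 256)      # current vessel mask
future = gen(vessel, pred(current))      # predicted future fundus image
```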

25 pages, 1666 KiB  
Article
Numerical Evaluation on Parametric Choices Influencing Segmentation Results in Radiology Images—A Multi-Dataset Study
by Pravda Jith Ray Prasad, Shanmugapriya Survarachakan, Zohaib Amjad Khan, Frank Lindseth, Ole Jakob Elle, Fritz Albregtsen and Rahul Prasanna Kumar
Electronics 2021, 10(4), 431; https://doi.org/10.3390/electronics10040431 - 10 Feb 2021
Cited by 5 | Viewed by 2868
Abstract
Medical image segmentation has gained increasing attention over the past decade, especially in the field of image-guided surgery, where robust, accurate, and fast segmentation tools are important for planning and navigation. In this work, we explore Convolutional Neural Network (CNN) based approaches for multi-dataset segmentation from CT examinations. We hypothesize that the selection of certain parameters in the network architecture design critically influences the segmentation results. We employed two different CNN architectures, 3D-UNet and VGG-16, as both networks are well accepted in the medical domain for segmentation tasks. To understand the efficiency of different parameter choices, we adopted two approaches: the first combines different weight initialization schemes with different activation functions, whereas the second combines different weight initialization methods with a set of loss functions and optimizers. For evaluation, the 3D-UNet was trained on the Medical Segmentation Decathlon dataset and VGG-16 on LiTS data. Quality assessment using eight quantitative metrics supports the use of our proposed strategies for improving segmentation results. Following a systematic evaluation of the results, we propose several strategies for obtaining good segmentation results. Both architectures were selected on the basis of their general acceptance in medical image segmentation and their promising results compared with other state-of-the-art networks. The highest Dice scores obtained with 3D-UNet for the liver, pancreas, and cardiac data were 0.897, 0.691, and 0.892, respectively. VGG-16, which was developed solely to work with liver data, delivered a Dice score of 0.921. Across all experiments, we observed that two combinations, Xavier (also known as Glorot) weight initialization with cross-entropy loss and the Adam optimizer (GloCEAdam), and LeCun weight initialization with cross-entropy loss and the Adam optimizer (LecCEAdam), worked best for most metrics in the 3D-UNet setting, while Xavier initialization together with cross-entropy loss and the tanh activation function (GloCEtanh) worked best for the VGG-16 network. These parameter combinations are proposed on the basis of their contribution to optimal outcomes in the segmentation evaluations. The preliminary results further suggest that these parameters could later be used to gain more insight into model convergence and optimal solutions. The quality assessment metrics and statistical analysis validate our conclusions, and we propose the presented work as a guide for choosing parameters for the best possible segmentation results in future works.
(This article belongs to the Special Issue Deep Learning for Medical Images: Challenges and Solutions)
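For concreteness, here is a minimal sketch (not the paper's experiment code) of the two best-performing 3D-UNet combinations, GloCEAdam and LecCEAdam: Xavier/Glorot or LeCun weight initialization paired with cross-entropy loss and the Adam optimizer. The tiny Conv3d stack is a stand-in for a real 3D-UNet, and the learning rate is an assumption.

```python
# Sketch: applying the GloCEAdam / LecCEAdam parameter combinations.
import math
import torch
import torch.nn as nn

def lecun_normal_(w):
    # LeCun normal: std = sqrt(1 / fan_in); PyTorch has no built-in,
    # so derive fan_in from the weight shape (in_ch * kernel volume).
    fan_in = w[0].numel()
    nn.init.normal_(w, mean=0.0, std=math.sqrt(1.0 / fan_in))

def init_weights(model, scheme="glorot"):
    for m in model.modules():
        if isinstance(m, (nn.Conv3d, nn.Linear)):
            if scheme == "glorot":
                nn.init.xavier_uniform_(m.weight)
            elif scheme == "lecun":
                lecun_normal_(m.weight)
            if m.bias is not None:
                nn.init.zeros_(m.bias)

model = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv3d(16, 2, 1))      # toy stand-in for a 3D-UNet
init_weights(model, scheme="glorot")             # or scheme="lecun"
criterion = nn.CrossEntropyLoss()                # CE loss, shared by both combos
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```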

11 pages, 2756 KiB  
Article
A Self-Spatial Adaptive Weighting Based U-Net for Image Segmentation
by Choongsang Cho, Young Han Lee, Jongyoul Park and Sangkeun Lee
Electronics 2021, 10(3), 348; https://doi.org/10.3390/electronics10030348 - 2 Feb 2021
Cited by 8 | Viewed by 2619
Abstract
Semantic image segmentation has a wide range of applications. In medical image segmentation, accuracy is even more important than in other areas, because the results provide information directly applicable to disease diagnosis, surgical planning, and history monitoring. The state-of-the-art models in medical image segmentation are variants of the encoder-decoder architecture known as U-Net. To effectively reflect spatial features in the feature maps of an encoder-decoder architecture, we propose a spatially adaptive weighting scheme for medical image segmentation. Specifically, a spatial feature map is estimated from the feature maps, and learned weighting parameters are obtained from the computed map, since segmentation results are predicted from the feature map through a convolutional layer. In the proposed networks, the convolutional block for extracting the feature map is replaced with widely used convolutional frameworks: VGG, ResNet, and Bottleneck ResNet structures. In addition, a bilinear up-sampling method replaces the up-convolutional layer to increase the resolution of the feature map. For performance evaluation, we used three datasets covering different medical imaging modalities. Experimental results show that the network with the proposed self-spatial adaptive weighting block based on the ResNet framework achieved the highest IoU and DICE scores on the three tasks compared with other methods. In particular, the segmentation network combining the proposed self-spatially adaptive block and the ResNet framework recorded the highest improvements of 3.01% and 2.89% in IoU and DICE scores, respectively, on the Nerve dataset. We therefore believe that the proposed scheme can be a useful tool for image segmentation tasks based on the encoder-decoder architecture.
(This article belongs to the Special Issue Deep Learning for Medical Images: Challenges and Solutions)
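One plausible reading of the proposed block is sketched below; the exact formulation in the paper may differ. A one-channel spatial map is estimated from the feature maps by a convolution and used to re-weight them position-wise, and a bilinear up-sampling layer of the kind the paper substitutes for up-convolutions is shown for reference.

```python
# Sketch: self-spatial adaptive weighting of encoder/decoder feature maps
# (assumed formulation, not the authors' exact block).
import torch
import torch.nn as nn

class SpatialAdaptiveWeighting(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # learn a one-channel spatial weighting map from the features themselves
        self.weight_map = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.weight_map(x)   # broadcast weighting over channels

feats = torch.randn(1, 64, 56, 56)       # feature maps from an encoder stage
weighted = SpatialAdaptiveWeighting(64)(feats)

# Bilinear up-sampling in place of an up-convolutional layer:
upsample = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
upsampled = upsample(weighted)           # (1, 64, 112, 112)
```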

15 pages, 5595 KiB  
Article
Development of Decision Support Software for Deep Learning-Based Automated Retinal Disease Screening Using Relatively Limited Fundus Photograph Data
by JoonHo Lee, Joonseok Lee, Sooah Cho, JiEun Song, Minyoung Lee, Sung Ho Kim, Jin Young Lee, Dae Hwan Shin, Joon Mo Kim, Jung Hun Bae, Su Jeong Song, Min Sagong and Donggeun Park
Electronics 2021, 10(2), 163; https://doi.org/10.3390/electronics10020163 - 13 Jan 2021
Cited by 13 | Viewed by 3412
Abstract
Purpose—This study was conducted to develop an automated detection algorithm for screening fundus abnormalities, including age-related macular degeneration (AMD), diabetic retinopathy (DR), epiretinal membrane (ERM), retinal vascular occlusion (RVO), and suspected glaucoma, among health screening program participants. Methods—The development dataset consisted of 43,221 retinal fundus photographs (from 25,564 participants; mean age 53.38 ± 10.97 years; 39.0% female) from a health screening program and patients of the Kangbuk Samsung Hospital Ophthalmology Department from 2006 to 2017. We evaluated our screening algorithm on independent validation datasets. Five separate one-versus-rest (OVR) classification algorithms based on deep convolutional neural networks (CNNs) were trained to detect AMD, ERM, DR, RVO, and suspected glaucoma. The ground truth for both development and validation datasets was graded at least twice by three ophthalmologists. The area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were calculated for each disease, as well as their macro-averages. Results—For the internal validation dataset, the average sensitivity was 0.9098 (95% confidence interval (CI), 0.8660–0.9536), the average specificity was 0.9079 (95% CI, 0.8576–0.9582), and the overall accuracy was 0.9092 (95% CI, 0.8769–0.9415). For the external validation dataset consisting of 1698 images, the average of the AUCs was 0.9025 (95% CI, 0.8671–0.9379). Conclusions—Our algorithm had high sensitivity and specificity for detecting major fundus abnormalities. This study will facilitate the expansion of deep learning-based computer-aided diagnostic decision support tools into actual clinical settings. Further research is needed to improve the generalization of this algorithm.
(This article belongs to the Special Issue Deep Learning for Medical Images: Challenges and Solutions)
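The per-disease evaluation can be illustrated mechanically. The sketch below, with synthetic labels and scores standing in for real model outputs, computes a per-disease AUC for the five one-versus-rest classifiers and their macro-average, mirroring the metrics reported above.

```python
# Sketch: macro-averaged AUC over five one-vs-rest disease classifiers
# (synthetic data; real ground truth and CNN scores would replace it).
import numpy as np
from sklearn.metrics import roc_auc_score

diseases = ["AMD", "DR", "ERM", "RVO", "suspected_glaucoma"]
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(1698, 5))   # per-disease ground-truth labels
y_prob = rng.random(size=(1698, 5))           # per-disease OVR model scores

aucs = [roc_auc_score(y_true[:, i], y_prob[:, i]) for i in range(len(diseases))]
macro_auc = float(np.mean(aucs))              # macro-average across diseases

for name, auc in zip(diseases, aucs):
    print(f"{name}: AUC = {auc:.4f}")
print(f"macro-average AUC = {macro_auc:.4f}")
```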

11 pages, 13539 KiB  
Article
Automatic Drusen Segmentation for Age-Related Macular Degeneration in Fundus Images Using Deep Learning
by Quang T. M. Pham, Sangil Ahn, Su Jeong Song and Jitae Shin
Electronics 2020, 9(10), 1617; https://doi.org/10.3390/electronics9101617 - 1 Oct 2020
Cited by 15 | Viewed by 3982
Abstract
Drusen are the primary indicator for detecting age-related macular degeneration (AMD): ophthalmologists evaluate the condition of AMD based on drusen in fundus images. However, in the early stage of AMD, drusen areas are usually small and vague, which makes the drusen segmentation task challenging. Moreover, owing to the high resolution of fundus images, it is hard for deep learning models to accurately predict the drusen areas. In this paper, we propose a multi-scale deep learning model for drusen segmentation. By exploiting both local and global information, we improve performance, especially for early-stage AMD cases.
(This article belongs to the Special Issue Deep Learning for Medical Images: Challenges and Solutions)
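A minimal sketch of the local/global idea follows, assuming a generic two-branch design rather than the paper's exact network: one branch processes the full-resolution input, the other a downsampled version whose features are upsampled back and fused before the segmentation head.

```python
# Sketch: fusing full-resolution (local) and downsampled (global) branches
# for drusen segmentation (toy layers; real branches would be deeper).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.local_branch = nn.Conv2d(3, 16, 3, padding=1)   # fine detail
        self.global_branch = nn.Conv2d(3, 16, 3, padding=1)  # coarse context
        self.head = nn.Conv2d(32, 1, 1)                      # drusen probability

    def forward(self, x):
        local = self.local_branch(x)
        # global context: downsample, encode, upsample back to input size
        g = F.interpolate(x, scale_factor=0.25, mode="bilinear", align_corners=False)
        g = self.global_branch(g)
        g = F.interpolate(g, size=x.shape[-2:], mode="bilinear", align_corners=False)
        return torch.sigmoid(self.head(torch.cat([local, g], dim=1)))

mask = MultiScaleSegmenter()(torch.randn(1, 3, 512, 512))  # (1, 1, 512, 512)
```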

13 pages, 525 KiB  
Article
Automatic Diabetic Retinopathy Grading via Self-Knowledge Distillation
by Ling Luo, Dingyu Xue and Xinglong Feng
Electronics 2020, 9(9), 1337; https://doi.org/10.3390/electronics9091337 - 19 Aug 2020
Cited by 34 | Viewed by 4001
Abstract
Diabetic retinopathy (DR) is a common fundus disease that leads to irreversible blindness and plagues the working-age population. Automated medical imaging diagnosis provides a non-invasive method to assist ophthalmologists in the timely screening of suspected DR cases and prevent further deterioration. However, state-of-the-art deep-learning-based methods generally have a large number of model parameters, which makes large-scale clinical deployment time-consuming. Moreover, the severity of DR is associated with lesions, and it is difficult for a model to focus on these regions. In this paper, we propose a novel deep-learning technique for grading DR with only image-level supervision. Specifically, we first customize the model with the help of self-knowledge distillation to achieve a trade-off between model performance and time complexity. Secondly, CAM-Attention is used to allow the network to focus on discriminative zones, e.g., microaneurysms and soft/hard exudates. Considering that directly attaching a classifier after the side branch would disrupt the hierarchical nature of convolutional neural networks, a Mimicking Module is employed that allows the side branch to actively mimic the main branch structure. Extensive experiments were conducted on two benchmark datasets, achieving an AUC of 0.965 and an accuracy of 92.9% on the Messidor dataset and an accuracy of 67.96% on the challenging IDRID dataset, which demonstrates the superior performance of our proposed method.
(This article belongs to the Special Issue Deep Learning for Medical Images: Challenges and Solutions)
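The distillation component can be illustrated generically. The sketch below is an assumed formulation rather than the authors' exact losses: a shallow side branch is trained against the ground truth while also matching the main branch's temperature-softened predictions, the usual self-knowledge-distillation objective. The temperature and mixing weight are placeholder values.

```python
# Sketch: self-knowledge distillation loss for a side branch mimicking
# the main branch within the same network (generic formulation).
import torch
import torch.nn.functional as F

def self_distillation_loss(side_logits, main_logits, labels, T=3.0, alpha=0.5):
    # hard-label loss on the side branch
    ce = F.cross_entropy(side_logits, labels)
    # soft-label loss: side branch mimics the main branch at temperature T
    kd = F.kl_div(
        F.log_softmax(side_logits / T, dim=1),
        F.softmax(main_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1 - alpha) * kd

side = torch.randn(8, 5, requires_grad=True)   # side-branch DR-grade logits
main = torch.randn(8, 5)                       # main-branch logits ("teacher")
labels = torch.randint(0, 5, (8,))
loss = self_distillation_loss(side, main.detach(), labels)
loss.backward()
```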

21 pages, 8933 KiB  
Article
Optimizing the Performance of Breast Cancer Classification by Employing the Same Domain Transfer Learning from Hybrid Deep Convolutional Neural Network Model
by Laith Alzubaidi, Omran Al-Shamma, Mohammed A. Fadhel, Laith Farhan, Jinglan Zhang and Ye Duan
Electronics 2020, 9(3), 445; https://doi.org/10.3390/electronics9030445 - 6 Mar 2020
Cited by 123 | Viewed by 10149
Abstract
Breast cancer is a significant factor in female mortality, and early diagnosis reduces the breast cancer death rate. Computer-aided diagnosis systems increase the efficiency and reduce the cost of cancer diagnosis. Traditional breast cancer classification techniques are based on handcrafted features, so their performance relies upon the chosen features; they are also very sensitive to differing sizes and complex shapes, and histopathological breast cancer images are very complex in shape. Deep learning models have become an alternative solution for diagnosis and have overcome the drawbacks of classical classification techniques. Although deep learning has performed well in various computer vision and pattern recognition tasks, it still faces challenges, one of the main ones being the lack of training data. To address this challenge and optimize performance, we utilized transfer learning, in which a deep learning model is first trained on one task and then fine-tuned for another. We employed transfer learning in two ways: training our proposed model first on a same-domain dataset and then on the target dataset, and training it on a different-domain dataset and then on the target dataset. We empirically show that same-domain transfer learning optimized the performance. Our hybrid model of parallel convolutional layers and residual links is used to classify hematoxylin–eosin-stained breast biopsy images into four classes: invasive carcinoma, in situ carcinoma, benign tumor, and normal tissue. To reduce the effect of overfitting, we augmented the images with different image processing techniques. The proposed model achieved state-of-the-art performance, outperforming the latest methods with a patch-wise classification accuracy of 90.5% and an image-wise classification accuracy of 97.4% on the validation set. Moreover, we achieved an image-wise classification accuracy of 96.1% on the test set of the microscopy ICIAR-2018 dataset.
(This article belongs to the Special Issue Deep Learning for Medical Images: Challenges and Solutions)
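The two-stage recipe is straightforward to express in code. In the sketch below, the backbone, loader names, epoch counts, and learning rates are all hypothetical placeholders; a torchvision ResNet-18 stands in for the paper's hybrid parallel-convolution/residual model.

```python
# Sketch: same-domain transfer learning — pretrain on a related
# histopathology dataset, then fine-tune on the four-class target task.
import torch
import torch.nn as nn
from torchvision.models import resnet18  # stand-in for the hybrid CNN

def train(model, loader, epochs, lr):
    criterion = nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            opt.step()

model = resnet18(num_classes=4)  # invasive, in situ, benign, normal

# Stage 1: pretrain on a larger same-domain (histopathology) dataset
# train(model, same_domain_loader, epochs=20, lr=1e-3)   # hypothetical loader

# Stage 2: fine-tune the same weights on the target breast-biopsy dataset
# train(model, target_loader, epochs=10, lr=1e-4)        # hypothetical loader
```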