Deep Learning in Medical Image Analysis, Volume II

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Medical Imaging".

Deadline for manuscript submissions: closed (30 April 2022) | Viewed by 23734

Special Issue Editors


Guest Editor
Informatics Building School of Informatics, University of Leicester, Leicester LE1 7RH, UK
Interests: explainable deep learning; medical image analysis; pattern recognition and medical sensors; artificial intelligence; intelligent computing

Guest Editor
1. Molecular Imaging and Neuropathology Division, Columbia University and New York State Psychiatric Institute, New York, NY 10032, USA
2. New York State Psychiatric Institute, New York, NY 10032, USA
Interests: structural mechanics; computational mechanics; contact mechanics; efficient solvers; interfaces; modeling; applications in mechanical and civil engineering

Special Issue Information

Dear Colleagues,

Over the past few years, deep learning has established itself as a powerful tool across a broad spectrum of imaging tasks, e.g., classification, prediction, detection, segmentation, diagnosis, interpretation, and reconstruction. While deep neural networks were initially nurtured in the computer vision community, they have quickly spread to medical imaging applications.

The growing power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. The use of modern medical instruments and the digitalization of medical care have generated enormous amounts of medical images in recent years. In this big data arena, new deep learning methods and computational models for efficient processing, analysis, and modeling of the generated data are crucially important for clinical applications and for understanding the underlying biological processes.

The purpose of this Special Issue on “Deep Learning in Medical Image Analysis, Volume II” is to present and highlight novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis.

Prof. Dr. Yudong Zhang
Prof. Dr. Juan Manuel Gorriz
Prof. Dr. Zhengchao Dong
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • deep learning
  • transfer learning
  • deep neural network
  • convolutional neural network
  • graph neural network
  • multitask learning
  • explainable AI
  • attention mechanism
  • biomedical engineering
  • multimodal imaging
  • semantic segmentation
  • image reconstruction
  • healthcare

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

18 pages, 5837 KiB  
Article
Adaptation to CT Reconstruction Kernels by Enforcing Cross-Domain Feature Maps Consistency
by Stanislav Shimovolos, Andrey Shushko, Mikhail Belyaev and Boris Shirokikh
J. Imaging 2022, 8(9), 234; https://doi.org/10.3390/jimaging8090234 - 30 Aug 2022
Viewed by 2196
Abstract
Deep learning methods provide significant assistance in analyzing coronavirus disease (COVID-19) in chest computed tomography (CT) images, including identification, severity assessment, and segmentation. Although the earlier developed methods address the lack of data and specific annotations, the current goal is to build a robust algorithm for clinical use, having a larger pool of available data. With larger datasets, the domain shift problem arises, affecting the performance of methods on unseen data. One of the critical sources of domain shift in CT images is the difference in the reconstruction kernels used to generate images from the raw data (sinograms). In this paper, we show a decrease in COVID-19 segmentation quality when a model is trained on smooth and tested on sharp reconstruction kernels. Furthermore, we compare several domain adaptation approaches to tackle the problem, such as task-specific augmentation and unsupervised adversarial learning. Finally, we propose an unsupervised adaptation method, called F-Consistency, that outperforms the previous approaches. Our method exploits a set of unlabeled CT image pairs which differ only in reconstruction kernels within every pair. It enforces the similarity of the network’s hidden representations (feature maps) by minimizing the mean squared error (MSE) between paired feature maps. We show that our method achieves a 0.64 Dice score on the test dataset with unseen sharp kernels, compared to the 0.56 Dice score of the baseline model. Moreover, F-Consistency scores 0.80 Dice between predictions on the paired images, almost doubling the baseline score of 0.46 and surpassing the other methods. We also show that F-Consistency generalizes better than the other methods trained on unlabeled data, both to unseen kernels and in the absence of COVID-19 lesions. Full article
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis, Volume II)
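The paired feature-map consistency term described above can be illustrated with a minimal NumPy sketch; the function name and the use of plain arrays in place of the network's actual tensors are assumptions, not the authors' implementation:

```python
import numpy as np

def f_consistency_loss(feats_smooth, feats_sharp):
    """MSE between paired hidden feature maps.

    feats_*: lists of feature maps from the same network, computed on
    the smooth- and sharp-kernel reconstructions of the same CT slice.
    """
    assert len(feats_smooth) == len(feats_sharp)
    losses = [np.mean((a - b) ** 2) for a, b in zip(feats_smooth, feats_sharp)]
    return float(np.mean(losses))

# Toy usage: identical feature maps yield zero consistency loss.
f1 = [np.ones((4, 8, 8)), np.zeros((8, 4, 4))]
f2 = [np.ones((4, 8, 8)), np.zeros((8, 4, 4))]
loss = f_consistency_loss(f1, f2)
```

In training, this term would be minimized alongside the supervised segmentation loss, pulling the paired representations together.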

25 pages, 101469 KiB  
Article
StainCUT: Stain Normalization with Contrastive Learning
by José Carlos Gutiérrez Pérez, Daniel Otero Baguer and Peter Maass
J. Imaging 2022, 8(7), 202; https://doi.org/10.3390/jimaging8070202 - 20 Jul 2022
Cited by 7 | Viewed by 3832
Abstract
In recent years, numerous deep-learning approaches have been developed for the analysis of histopathology Whole Slide Images (WSI). A recurrent issue is the lack of generalization ability of a model that has been trained with images of one laboratory and then used to analyze images of a different laboratory. This occurs mainly due to the use of different scanners, laboratory procedures, and staining variations. This can produce strong color differences, which change not only the characteristics of the image, such as the contrast, brightness, and saturation, but also create more complex style variations. In this paper, we present a deep-learning solution based on contrastive learning to transfer from one staining style to another: StainCUT. This method eliminates the need to choose a reference frame and does not need paired images with different staining to learn the mapping between the stain distributions. Additionally, it does not rely on the CycleGAN approach, which makes the method efficient in terms of memory consumption and running time. We evaluate the model using two datasets that consist of the same specimens digitized with two different scanners. We also apply it as a preprocessing step for the semantic segmentation of metastases in lymph nodes. The model was trained on data from one of the laboratories and evaluated on data from another. The results validate the hypothesis that stain normalization indeed improves the performance of the model. Finally, we also investigate and compare the application of the stain normalization step during the training of the model and at inference. Full article
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis, Volume II)
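As a rough illustration of the contrastive objective behind this family of methods, the following sketches an InfoNCE-style loss over patch features of the kind used in contrastive image translation; the function name, the temperature value, and the convention that the positive key sits at index 0 are assumptions:

```python
import numpy as np

def info_nce(query, keys, tau=0.07):
    """InfoNCE loss for one query patch feature.

    keys[0] is the positive (the feature of the corresponding patch in
    the translated image); remaining rows are negatives from other
    spatial locations. Vectors are L2-normalized before comparison.
    """
    q = query / np.linalg.norm(query)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = k @ q / tau
    # cross-entropy with the positive at index 0, in log-sum-exp form
    return float(np.log(np.sum(np.exp(logits))) - logits[0])

# A query aligned with its positive and orthogonal to the negatives
# yields a near-zero loss.
q = np.array([1.0, 0.0])
keys = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
loss = info_nce(q, keys)
```

Minimizing this loss encourages corresponding patches before and after stain transfer to share content, without requiring paired images.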

12 pages, 2015 KiB  
Article
Comparison of Ultrasound Image Classifier Deep Learning Algorithms for Shrapnel Detection
by Emily N. Boice, Sofia I. Hernandez-Torres and Eric J. Snider
J. Imaging 2022, 8(5), 140; https://doi.org/10.3390/jimaging8050140 - 20 May 2022
Cited by 11 | Viewed by 2824
Abstract
Ultrasound imaging is essential in emergency medicine and combat casualty care, oftentimes used as a critical triage tool. However, identifying injuries, such as shrapnel embedded in tissue or a pneumothorax, can be challenging without extensive ultrasonography training, which may not be available in prolonged field care or emergency medicine scenarios. Artificial intelligence can simplify this by automating image interpretation, but only if it can be deployed for use in real time. We previously developed a deep learning neural network model specifically designed to identify shrapnel in ultrasound images, termed ShrapML. Here, we expand on that work to further optimize the model and compare its performance to that of conventional models trained on the ImageNet database, such as ResNet50. Through Bayesian optimization, the model’s parameters were further refined, resulting in an F1 score of 0.98. We compared the proposed model to four conventional models: DarkNet-19, GoogleNet, MobileNetv2, and SqueezeNet, which were down-selected based on speed and testing accuracy. Although MobileNetv2 achieved a higher accuracy than ShrapML, there was a tradeoff between accuracy and speed, with ShrapML being 10× faster than MobileNetv2. In conclusion, real-time deployment of algorithms such as ShrapML can reduce the cognitive load for medical providers in high-stress emergency or military medicine scenarios. Full article
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis, Volume II)

15 pages, 1403 KiB  
Article
Addressing Motion Blurs in Brain MRI Scans Using Conditional Adversarial Networks and Simulated Curvilinear Motions
by Shangjin Li and Yijun Zhao
J. Imaging 2022, 8(4), 84; https://doi.org/10.3390/jimaging8040084 - 23 Mar 2022
Viewed by 2957
Abstract
In-scanner head motion often leads to degradation in MRI scans and is a major source of error in diagnosing brain abnormalities. Researchers have explored various approaches, including blind and nonblind deconvolutions, to correct the motion artifacts in MRI scans. Inspired by the recent success of deep learning models in medical image analysis, we investigate the efficacy of employing generative adversarial networks (GANs) to address motion blurs in brain MRI scans. We cast the problem as a blind deconvolution task where a neural network is trained to guess a blurring kernel that produced the observed corruption. Specifically, our study explores a new approach under the sparse coding paradigm where every ground truth corrupting kernel is assumed to be a “combination” of a relatively small universe of “basis” kernels. This assumption is based on the intuition that, on small distance scales, patients’ movements follow simple curves and that complex motions can be obtained by combining a number of simple ones. We show that, with a suitably dense basis, a neural network can effectively guess the degrading kernel and reverse some of the damage in the motion-affected real-world scans. To this end, we generated 10,000 continuous and curvilinear kernels in random positions and directions that are likely to uniformly populate the space of corrupting kernels in real-world scans. We further generated a large dataset of 225,000 pairs of sharp and blurred MR images to facilitate training effective deep learning models. Our experimental results demonstrate the viability of the proposed approach evaluated using synthetic and real-world MRI scans. Our study further suggests there is merit in exploring separate models for the sagittal, axial, and coronal planes. Full article
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis, Volume II)
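The basis-kernel idea can be sketched as follows: a corrupting kernel is built as a normalized weighted sum of small "basis" motion kernels and then applied to the image. This is an illustrative NumPy sketch, not the paper's pipeline; the helper names and the tiny 3×3 kernels are assumptions, and the sliding window computes cross-correlation, which coincides with convolution for symmetric kernels:

```python
import numpy as np

def combine_kernels(basis, weights):
    """Build a corrupting kernel as a normalized weighted sum of basis kernels."""
    w = np.asarray(weights, dtype=float)
    k = sum(wi * bi for wi, bi in zip(w / w.sum(), basis))
    return k / k.sum()  # keep the blur brightness-preserving

def blur(image, kernel):
    """'Same'-size 2D cross-correlation implementing the motion blur."""
    kh, kw = kernel.shape
    pad = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(image.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * pad[i:i + image.shape[0], j:j + image.shape[1]]
    return out

# Two toy 3x3 basis kernels: an identity (no motion) and a horizontal streak.
identity = np.zeros((3, 3)); identity[1, 1] = 1.0
streak = np.zeros((3, 3)); streak[1, :] = 1.0 / 3.0

kernel = combine_kernels([identity, streak], weights=[1.0, 0.0])
image = np.arange(16, dtype=float).reshape(4, 4)
blurred = blur(image, kernel)  # zero weight on the streak: image unchanged
```

With a dense basis of such kernels, the network's job reduces to predicting the combination weights rather than an arbitrary kernel.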

18 pages, 1042 KiB  
Article
Per-COVID-19: A Benchmark Dataset for COVID-19 Percentage Estimation from CT-Scans
by Fares Bougourzi, Cosimo Distante, Abdelkrim Ouafi, Fadi Dornaika, Abdenour Hadid and Abdelmalik Taleb-Ahmed
J. Imaging 2021, 7(9), 189; https://doi.org/10.3390/jimaging7090189 - 18 Sep 2021
Cited by 16 | Viewed by 4156
Abstract
COVID-19 infection recognition is a very important step in the fight against the COVID-19 pandemic. In fact, many methods have been used to recognize COVID-19 infection, including Reverse Transcription Polymerase Chain Reaction (RT-PCR), X-ray scans, and Computed Tomography scans (CT scans). In addition to recognizing COVID-19 infection, CT scans can provide further important information about the evolution of the disease and its severity. With the extensive number of COVID-19 infections, estimating the COVID-19 infection percentage can help intensive care units free up resuscitation beds for critical cases and follow other protocols for less severe cases. In this paper, we introduce a COVID-19 percentage estimation dataset from CT scans, where the labeling process was accomplished by two expert radiologists. Moreover, we evaluate the performance of three Convolutional Neural Network (CNN) architectures: ResneXt-50, Densenet-161, and Inception-v3. For the three CNN architectures, we use two loss functions: MSE and Dynamic Huber. In addition, two pretraining scenarios are investigated (ImageNet pretrained models and models pretrained on X-ray data). The evaluated approaches achieved promising results on the estimation of COVID-19 infection. Inception-v3 with the Dynamic Huber loss function and X-ray pretraining achieved the best slice-level results: 0.9365, 5.10, and 9.25 for the Pearson Correlation coefficient (PC), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE), respectively. The same approach achieved 0.9603, 4.01, and 6.79 for PCsubj, MAEsubj, and RMSEsubj, respectively, at the subject level. These results show that CNN architectures can provide an accurate and fast solution for estimating the COVID-19 infection percentage and monitoring the evolution of the patient's state. Full article
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis, Volume II)
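The slice- and subject-level figures above are standard regression metrics; a minimal sketch of how they are computed (the function name and the toy percentages are illustrative):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Pearson correlation, MAE, and RMSE for percentage regression."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    pc = np.corrcoef(y_true, y_pred)[0, 1]          # Pearson correlation
    mae = float(np.mean(np.abs(y_true - y_pred)))   # mean absolute error
    rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
    return pc, mae, rmse

# A constant +5-point bias leaves the correlation at 1 but shows up in MAE/RMSE.
y_true = np.array([0.0, 25.0, 50.0, 100.0])
pc, mae, rmse = regression_metrics(y_true, y_true + 5.0)
```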

10 pages, 1373 KiB  
Article
How Can a Deep Learning Algorithm Improve Fracture Detection on X-rays in the Emergency Room?
by Guillaume Reichert, Ali Bellamine, Matthieu Fontaine, Beatrice Naipeanu, Adrien Altar, Elodie Mejean, Nicolas Javaud and Nathalie Siauve
J. Imaging 2021, 7(7), 105; https://doi.org/10.3390/jimaging7070105 - 25 Jun 2021
Cited by 11 | Viewed by 6033
Abstract
The growing need for emergency imaging has greatly increased the number of conventional X-rays, particularly for traumatic injury. Deep learning (DL) algorithms could improve fracture screening by radiologists and emergency room (ER) physicians. We used an algorithm developed for the detection of appendicular skeleton fractures and evaluated its performance for detecting traumatic fractures on conventional X-rays in the ER, without the need for training on local data. This algorithm was tested on all patients (N = 125) consulting at the Louis Mourier ER in May 2019 for limb trauma. Patients were selected by two emergency physicians from the clinical database used in the ER. Their X-rays were exported and analyzed by a radiologist. The prediction made by the algorithm and the annotation made by the radiologist were compared. For the 125 patients included, 25 patients with a fracture were identified by the clinicians, 24 of whom were identified by the algorithm (sensitivity of 96%). The algorithm incorrectly predicted a fracture in 14 of the 100 patients without fractures (specificity of 86%). The negative predictive value was 98.85%. This study shows that DL algorithms are potentially valuable diagnostic tools for detecting fractures in the ER and could be used in the training of junior radiologists. Full article
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis, Volume II)
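The reported figures follow directly from the study's confusion counts (24 of 25 fractures detected, 14 false alarms among the 100 fracture-free patients); a quick check, with the function name being illustrative:

```python
def screening_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, and negative predictive value."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)  # negative predictive value
    return sensitivity, specificity, npv

# Counts from the study: 24 detected fractures, 1 missed, 14 false alarms.
sens, spec, npv = screening_metrics(tp=24, fn=1, fp=14, tn=86)
# sens = 0.96, spec = 0.86, npv = 86/87 ~ 0.9885, matching the reported values
```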
