Editorial

Deep Learning in Medical Image Analysis

Yudong Zhang 1,*, Juan Manuel Gorriz 2 and Zhengchao Dong 3
1 School of Informatics, University of Leicester, Leicester LE1 7RH, UK
2 Department of Signal Theory, Telematics and Communications, University of Granada, 18071 Granada, Spain
3 Molecular Imaging and Neuropathology Division, Columbia University and New York State Psychiatric Institute, New York, NY 10032, USA
* Author to whom correspondence should be addressed.
J. Imaging 2021, 7(4), 74; https://doi.org/10.3390/jimaging7040074
Submission received: 12 April 2021 / Accepted: 16 April 2021 / Published: 20 April 2021
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis)
Over recent years, deep learning (DL) has established itself as a powerful tool across a broad spectrum of imaging tasks, such as classification, prediction, detection, segmentation, diagnosis, interpretation, and reconstruction. Although deep neural networks were initially nurtured in the computer vision community, they have quickly spread to medical imaging applications.
The growing power of DL in diagnosing diseases will support physicians and speed up decision making in clinical environments. The use of modern medical instruments and the digitalization of medical care have led to enormous amounts of medical images being generated in recent years. In this big data arena, new DL methods and computational models for efficient data processing, analysis, and modeling are crucial for clinical applications and for understanding the underlying biological processes.
The purpose of this Special Issue (SI) “Deep Learning in Medical Image Analysis” is to present and highlight novel algorithms, architectures, techniques, and applications of DL for medical image analysis.
This SI issued its call for papers in April 2020 and received more than 60 submissions from over 30 countries. After rigorous peer review, only 22 papers were accepted and published: 18 research articles and 4 review papers.
Leuschner and Schmidt (2021) [1] from Germany, the Netherlands, and Canada present the results of a data challenge that the authors organized, bringing together algorithm experts from different institutes to jointly work on quantitative evaluation of several data-driven methods on two large, public datasets during a ten-day sprint.
Shirokikh and Shevtsov (2021) [2] from Russia propose a new segmentation method that mimics the way a human expert examines a 3D study. Their method not only reduces the inference time from 10 min to 15 s but also preserves state-of-the-art segmentation quality.
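For readers unfamiliar with this kind of coarse-to-fine strategy, the sketch below illustrates the general idea in PyTorch: a heavily downsampled pass localizes the target, and only the cropped region of interest is segmented at full resolution. It is a simplified illustration under our own assumptions (the coarse_net and fine_net models, the scale factor, and the margin are placeholders), not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def coarse_to_fine_segmentation(volume, coarse_net, fine_net, scale=0.25, margin=8):
    """Localize the target on a downsampled copy, then segment only the cropped
    region of interest at full resolution. `volume` is a (1, 1, D, H, W) tensor."""
    # Coarse pass on a heavily downsampled volume to find the target region.
    small = F.interpolate(volume, scale_factor=scale, mode="trilinear", align_corners=False)
    coarse_mask = (torch.sigmoid(coarse_net(small)) > 0.5).squeeze()
    if not coarse_mask.any():                       # nothing found at low resolution
        return torch.zeros_like(volume, dtype=torch.bool)

    # Bounding box of the coarse prediction, mapped back to full resolution.
    idx = coarse_mask.nonzero()
    lo = ((idx.min(dim=0).values / scale).long() - margin).clamp(min=0)
    hi = (idx.max(dim=0).values / scale).long() + margin

    # Fine pass only on the cropped region, pasted back into a full-size mask.
    roi = volume[..., lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    full = torch.zeros_like(volume, dtype=torch.bool)
    full[..., lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = torch.sigmoid(fine_net(roi)) > 0.5
    return full
```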
Zhang and Li (2021) [3] from China and the USA propose a meta-learning algorithm to augment the existing algorithms with the capability to learn from diverse segmentation tasks across the entire task distribution. The authors conduct experiments using a diverse set of segmentation tasks from the Medical Segmentation Decathlon and two meta-learning benchmarks.
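As a rough illustration of the meta-learning idea (learning an initialization that adapts quickly to a new segmentation task), the snippet below sketches a generic first-order meta-update in the style of Reptile. It is not the authors' algorithm; the loss function, learning rates, and the task iterator are placeholder assumptions.

```python
import copy
import torch

def reptile_step(model, loss_fn, tasks, inner_lr=1e-2, inner_steps=5, meta_lr=0.1):
    """One meta-update over a batch of segmentation tasks (generic first-order
    sketch; `tasks` yields (images, masks) pairs, one per task)."""
    for images, masks in tasks:
        # Inner loop: adapt a clone of the model to a single task.
        task_model = copy.deepcopy(model)
        opt = torch.optim.SGD(task_model.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            opt.zero_grad()
            loss_fn(task_model(images), masks).backward()
            opt.step()
        # Outer update: move the meta-parameters toward the adapted parameters.
        with torch.no_grad():
            for p, q in zip(model.parameters(), task_model.parameters()):
                p.add_(meta_lr * (q - p))
```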
Nannavecchia and Girardi (2021) [4] from Italy present a system able to automatically detect the causes of cardiac pathologies in electrocardiogram (ECG) signals from personal monitoring devices, with the aim of alerting the patient to send the ECG to a medical specialist for a correct diagnosis and proper therapy.
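A minimal example of a 1D convolutional network operating on ECG windows is sketched below; the layer sizes, window length, and number of classes are illustrative assumptions and do not reproduce the architecture of [4].

```python
import torch
import torch.nn as nn

class ECG1DCNN(nn.Module):
    """Minimal 1D CNN for single-lead ECG classification (illustrative sketch;
    layer sizes and `n_classes` are placeholders)."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                 # length-independent pooling
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                            # x: (batch, 1, n_samples)
        return self.classifier(self.features(x).flatten(1))

# Example: classify a batch of 4 ECG windows of 2700 samples each.
logits = ECG1DCNN()(torch.randn(4, 1, 2700))         # -> (4, 5)
```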
Furtado (2021) [5] from Portugal takes on three different medical image segmentation problems: (i) organs in magnetic resonance images, (ii) the liver in computed tomography images, and (iii) diabetic retinopathy lesions in eye fundus images. The author quantifies loss functions and their variations, as well as segmentation scores for the different targets, and concludes that the Dice loss performs best.
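Since several of the compared losses build on the Dice coefficient, a generic soft Dice loss for binary segmentation is sketched below; the exact variants evaluated in [5] may differ.

```python
import torch

def soft_dice_loss(logits, targets, eps=1e-6):
    """Soft Dice loss for binary segmentation: 1 - 2|P∩G| / (|P| + |G|).
    `logits` and `targets` have shape (batch, 1, H, W)."""
    probs = torch.sigmoid(logits)
    dims = (1, 2, 3)
    intersection = (probs * targets).sum(dims)
    union = probs.sum(dims) + targets.sum(dims)
    dice = (2 * intersection + eps) / (union + eps)
    return 1 - dice.mean()
```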
Shimizu and Hachiuma (2021) [6] from Japan combine three modules for localization, selection, and classification to detect two surgical tools. In the localization module, the authors employ Faster R-CNN to detect surgical tools and the target hands; in the classification module, they extract hand-movement information by combining ResNet-18 with long short-term memory (LSTM) to classify the two tools.
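The classification module can be pictured with the simplified sketch below, in which per-frame ResNet-18 features are fed to an LSTM; the crop size, hidden size, and two-class head are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class HandMotionClassifier(nn.Module):
    """ResNet-18 features per frame followed by an LSTM over the clip
    (simplified sketch; `n_tools` and `hidden` are placeholders)."""
    def __init__(self, n_tools=2, hidden=256):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                  # keep the 512-d pooled features
        self.backbone = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_tools)

    def forward(self, clips):                        # clips: (batch, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1))   # (batch*T, 512)
        seq, _ = self.lstm(feats.view(b, t, -1))     # (batch, T, hidden)
        return self.head(seq[:, -1])                 # classify from the last step

# Example: a batch of 2 clips of 16 cropped frames each.
logits = HandMotionClassifier()(torch.randn(2, 16, 3, 112, 112))  # -> (2, 2)
```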
Bourouis and Alharbi (2021) [7] from Saudi Arabia and Canada introduce a new statistical framework to discriminate patients who are either negative or positive for certain kinds of viral infection and pneumonia. The authors tackle the problem via a fully Bayesian approach based on a flexible statistical model, the shifted-scaled Dirichlet mixture model.
Andrade and Teixeira (2021) [8] from Portugal present a technique to efficiently exploit the sizable number of available dermoscopic images to improve the segmentation of macroscopic skin lesion images. Quantitative segmentation results are demonstrated on the available macroscopic segmentation databases: SMARTSKINS and the Dermofit Image Library.
Kandel and Castelli (2020) [9] from Portugal and Slovenia study how best to classify musculoskeletal images, contrasting transfer learning with training from scratch. The authors apply six state-of-the-art architectures and compare their performance when trained with transfer learning and when trained from scratch.
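The two training regimes being compared can be set up as in the sketch below, which builds the same network either with ImageNet weights or from random initialization; DenseNet-121 is used here purely as an example and is an assumption on our part, not necessarily one of the six architectures in [9].

```python
import torch.nn as nn
from torchvision import models

def build_classifier(pretrained=True, freeze_backbone=False, n_classes=2):
    """Build the same architecture either with ImageNet weights (transfer
    learning) or randomly initialized (training from scratch)."""
    weights = models.DenseNet121_Weights.IMAGENET1K_V1 if pretrained else None
    model = models.densenet121(weights=weights)
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False                  # train only the new head
    model.classifier = nn.Linear(model.classifier.in_features, n_classes)
    return model

transfer_model = build_classifier(pretrained=True)   # fine-tune ImageNet weights
scratch_model = build_classifier(pretrained=False)   # random initialization
```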
Comelli (2020) [10] from Italy presents a method capable of achieving the volume reconstruction directly in 3D by leveraging an active surface algorithm. The results confirm that the active surface algorithm is superior to the active contour algorithm, outperforming an earlier approach on all the investigated anatomical districts with a Dice similarity coefficient of 90.47 ± 2.36% for lung cancer, 88.30 ± 2.89% for head and neck cancer, and 90.29 ± 2.52% for brain cancer.
The methodology proposed by Ortega-Ruiz and Karabağ (2020) [11] from Mexico and the United Kingdom is based on traditional computer vision methods (K-means, watershed segmentation, Otsu’s binarization, and morphological operations), implementing color separation, segmentation, and feature extraction. The methodology is validated against the scores assigned by two pathologists using the intraclass correlation coefficient.
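A compact illustration of such a classical pipeline, here with scikit-image, is given below; the specific parameter values and the feature set are our own illustrative choices, not those of [11].

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import color, filters, morphology, measure
from skimage.segmentation import watershed

def segment_nuclei(rgb_image):
    """Classical pipeline in the spirit of [11]: Otsu binarization, morphological
    clean-up, and watershed splitting of touching nuclei (parameters are illustrative)."""
    gray = color.rgb2gray(rgb_image)
    binary = gray < filters.threshold_otsu(gray)          # nuclei darker than background
    binary = morphology.remove_small_objects(binary, min_size=64)
    binary = morphology.binary_closing(binary, morphology.disk(2))

    # Watershed on the distance transform separates touching objects.
    distance = ndi.distance_transform_edt(binary)
    markers, _ = ndi.label(morphology.h_maxima(distance, 2))
    labels = watershed(-distance, markers, mask=binary)

    # Simple per-object features (area, eccentricity) for downstream scoring.
    return [(r.label, r.area, r.eccentricity) for r in measure.regionprops(labels)]
```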
The main aim of Kandel and Castelli (2020) [12] from Portugal and Slovenia is to improve the robustness of the classifier by comparing six different first-order stochastic gradient-based optimizers and selecting the best one for this particular dataset. Their results show that the adaptive optimizers achieved the highest scores, except for AdaGrad, which achieved the lowest.
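The comparison protocol amounts to training identical models under different optimizers and evaluating each one, as in the generic sketch below; the optimizer list and hyperparameters shown are illustrative assumptions, and the helpers make_model, train_one_epoch, and evaluate are placeholders.

```python
import torch

def compare_optimizers(make_model, train_one_epoch, evaluate, epochs=10):
    """Train identical models with different first-order optimizers and report
    their validation scores (generic sketch of the comparison in [12])."""
    optimizers = {
        "SGD":      lambda p: torch.optim.SGD(p, lr=1e-2, momentum=0.9),
        "Adam":     lambda p: torch.optim.Adam(p, lr=1e-3),
        "AdaGrad":  lambda p: torch.optim.Adagrad(p, lr=1e-2),
        "RMSprop":  lambda p: torch.optim.RMSprop(p, lr=1e-3),
        "AdaDelta": lambda p: torch.optim.Adadelta(p, lr=1.0),
        "AdaMax":   lambda p: torch.optim.Adamax(p, lr=2e-3),
    }
    results = {}
    for name, make_opt in optimizers.items():
        model = make_model()                     # same architecture and init scheme
        opt = make_opt(model.parameters())
        for _ in range(epochs):
            train_one_epoch(model, opt)
        results[name] = evaluate(model)
    return results
```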
La Barbera and Polónia (2020) [13] from Italy and Portugal employ a pipeline based on a cascade of deep neural network classifiers and multi-instance learning to detect the presence of HER2 from haematoxylin–eosin slides, which partly mimics the pathologist’s behavior by first recognizing cancer and then evaluating HER2.
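To convey the multi-instance learning component, the sketch below shows a generic attention-based pooling of tile embeddings into a single slide-level prediction; it is not the cascade of classifiers used in [13], and the embedding dimension is a placeholder.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Attention-based multi-instance pooling: a slide is a bag of tiles and
    receives a single label (generic MIL sketch)."""
    def __init__(self, embed_dim=512, hidden=128):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        self.classifier = nn.Linear(embed_dim, 1)

    def forward(self, tile_embeddings):              # (n_tiles, embed_dim)
        weights = torch.softmax(self.attention(tile_embeddings), dim=0)
        slide_embedding = (weights * tile_embeddings).sum(dim=0)
        return self.classifier(slide_embedding)      # slide-level logit

# Example: a slide represented by 200 tile embeddings of dimension 512.
logit = AttentionMIL()(torch.randn(200, 512))
```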
Khoshdel and Asefi (2020) [14] from Canada employ a 3D convolutional neural network, based on the U-Net architecture, that takes in 3D images obtained using the contrast-source inversion method and attempts to produce the true 3D image of the permittivity.
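To make the architecture concrete, a miniature one-level 3D U-Net that maps a reconstructed volume to a refined volume of the same shape is sketched below; the real network in [14] is deeper and its channel counts differ.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(),
    )

class TinyUNet3D(nn.Module):
    """One-level 3D U-Net mapping a volume to a refined volume of the same
    shape (illustrative miniature only)."""
    def __init__(self, channels=16):
        super().__init__()
        self.enc = conv_block(1, channels)
        self.down = nn.MaxPool3d(2)
        self.bottleneck = conv_block(channels, channels * 2)
        self.up = nn.ConvTranspose3d(channels * 2, channels, 2, stride=2)
        self.dec = conv_block(channels * 2, channels)
        self.out = nn.Conv3d(channels, 1, 1)

    def forward(self, x):                            # x: (batch, 1, D, H, W)
        e = self.enc(x)
        b = self.bottleneck(self.down(e))
        d = self.dec(torch.cat([self.up(b), e], dim=1))   # skip connection
        return self.out(d)

# Example: refine a 32^3 volume produced by the contrast-source inversion step.
refined = TinyUNet3D()(torch.randn(1, 1, 32, 32, 32))      # same shape as input
```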
Dupont and Kalinicheva (2020) [15] from France propose a DL architecture that can detect changes in eye fundus images and assess the progression of the disease. Their method is based on joint autoencoders and is fully unsupervised. The algorithm has been applied to pairs of images from time series of eye fundus images of 24 age-related macular degeneration patients.
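The unsupervised change-detection idea can be illustrated with the simplified single-autoencoder sketch below: a model trained to reconstruct the earlier acquisition flags poorly reconstructed regions of the later one as change. The joint-autoencoder architecture of [15] is more elaborate; the layer sizes and threshold here are assumptions.

```python
import torch
import torch.nn as nn

class FundusAutoencoder(nn.Module):
    """Small convolutional autoencoder for registered grayscale fundus crops
    (simplified sketch of the unsupervised idea, not the joint architecture)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def change_map(model, later, threshold=0.1):
    """Pixels of the later image that a model trained on the earlier image
    cannot reconstruct well are marked as changed."""
    with torch.no_grad():
        residual = (model(later) - later).abs()
    return residual > threshold

# Example on a 256x256 grayscale fundus crop from the later time point.
mask = change_map(FundusAutoencoder(), torch.rand(1, 1, 256, 256))
```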
Almeida and Santos (2020) [16] from Brazil propose a strategy for the analysis of skin images that aims to choose the best mathematical classifier model for the identification of melanoma, with the objective of assisting dermatologists, especially towards an early diagnosis.
Tang and Kumar (2020) [17] from the USA propose a deep multimodal model that learns a joint representation from two types of connectomic data offered by fMRI scans. Their multimodal training strategy achieves a classification accuracy of 74%, a recall of 95%, and an F1 score of 0.805, and its overall performance is superior to that obtained using only one type of functional data.
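As a quick consistency check, the standard binary F1 definition (the harmonic mean of precision and recall) implies a precision of roughly 0.70 for the reported recall and F1:

```python
# F1 = 2PR / (P + R); solving for precision P given recall R and F1.
recall, f1 = 0.95, 0.805
precision = f1 * recall / (2 * recall - f1)
print(round(precision, 3))   # 0.698, i.e. roughly 0.70
```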
In the work of Pintelas and Liaskos (2020) [18] from Greece, an accurate and interpretable machine learning framework is proposed for image classification problems, capable of producing high-quality explanations. Their results demonstrate the efficiency of the proposed model, which achieves sufficient prediction accuracy while remaining interpretable and explainable in simple human terms.
Kieu and Bade (2020) [19] from Malaysia and the United Kingdom present a taxonomy of state-of-the-art DL-based lung disease detection systems, visualize the trends of recent work in the domain, and identify the remaining issues and potential future directions.
In the survey of Debelee and Kebede (2020) [20] from Ethiopia and Germany, several DL-based approaches applied to breast cancer, cervical cancer, brain tumors, and colon and lung cancers are studied and reviewed. The review indicates that DL methods are the state of the art in tumor detection, segmentation, feature extraction, and classification.
Aruleba and Obaido (2020) [21] from South Africa provide a concise overview of past and present conventional diagnostic approaches in breast cancer detection. Further, the authors give an account of several computational models (machine learning, deep learning, and robotics) that have been developed and can serve as alternative techniques for breast cancer diagnostic imaging.
Singh and Sengupta (2020) [22] from Canada present a review of the current applications of explainable deep learning for different medical imaging tasks. The various approaches, challenges for clinical deployment, and the areas requiring further research are discussed in this review from a practical standpoint of a deep learning researcher designing a system for the clinical end-users.
The 22 accepted papers in this SI are from 19 countries: Brazil, Canada, China, Ethiopia, France, Germany, Greece, Italy, Japan, Malaysia, Mexico, the Netherlands, Portugal, Russia, Saudi Arabia, Slovenia, South Africa, the UK, and the USA.
All three Guest Editors hope that this Special Issue “Deep Learning in Medical Image Analysis” will benefit the scientific community and contribute to the knowledge base, and we would like to take this opportunity to applaud the contributions of all the authors to this Special Issue. The contributions and efforts of the reviewers to enhance the quality of the manuscripts are also much appreciated. We also acknowledge the assistance of the MDPI editorial team, which made our Guest Editor tasks much easier.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Leuschner, J.; Schmidt, M.; Ganguly, P.; Andriiashen, V.; Coban, S.; Denker, A.; Bauer, D.; Hadjifaradji, A.; Batenburg, K.; Maass, P.; et al. Quantitative Comparison of Deep Learning-Based Image Reconstruction Methods for Low-Dose and Sparse-Angle CT Applications. J. Imaging 2021, 7, 44.
2. Shirokikh, B.; Shevtsov, A.; Dalechina, A.; Krivov, E.; Kostjuchenko, V.; Golanov, A.; Gombolevskiy, V.; Morozov, S.; Belyaev, M. Accelerating 3D Medical Image Segmentation by Adaptive Small-Scale Target Localization. J. Imaging 2021, 7, 35.
3. Zhang, P.; Li, J.; Wang, Y.; Pan, J. Domain Adaptation for Medical Image Segmentation: A Meta-Learning Method. J. Imaging 2021, 7, 31.
4. Nannavecchia, A.; Girardi, F.; Fina, P.; Scalera, M.; DiMauro, G. Personal Heart Health Monitoring Based on 1D Convolutional Neural Network. J. Imaging 2021, 7, 26.
5. Furtado, P. Testing Segmentation Popular Loss and Variations in Three Multiclass Medical Imaging Problems. J. Imaging 2021, 7, 16.
6. Shimizu, T.; Hachiuma, R.; Kajita, H.; Takatsume, Y.; Saito, H. Hand Motion-Aware Surgical Tool Localization and Classification from an Egocentric Camera. J. Imaging 2021, 7, 15.
7. Bourouis, S.; Alharbi, A.; Bouguila, N. Bayesian Learning of Shifted-Scaled Dirichlet Mixture Models and Its Application to Early COVID-19 Detection in Chest X-ray Images. J. Imaging 2021, 7, 7.
8. Andrade, C.; Teixeira, L.F.; Vasconcelos, M.J.M.; Rosado, L. Data Augmentation Using Adversarial Image-to-Image Translation for the Segmentation of Mobile-Acquired Dermatological Images. J. Imaging 2021, 7, 2.
9. Kandel, I.; Castelli, M.; Popovič, A. Musculoskeletal Images Classification for Detection of Fractures Using Transfer Learning. J. Imaging 2020, 6, 127.
10. Comelli, A. Fully 3D Active Surface with Machine Learning for PET Image Segmentation. J. Imaging 2020, 6, 113.
11. Ortega-Ruiz, M.A.; Karabağ, C.; Garduño, V.G.; Reyes-Aldasoro, C.C. Morphological Estimation of Cellularity on Neo-Adjuvant Treated Breast Cancer Histological Images. J. Imaging 2020, 6, 101.
12. Kandel, I.; Castelli, M.; Popovič, A. Comparative Study of First Order Optimizers for Image Classification Using Convolutional Neural Networks on Histopathology Images. J. Imaging 2020, 6, 92.
13. La Barbera, D.; Polónia, A.; Roitero, K.; Conde-Sousa, E.; Della Mea, V. Detection of HER2 from Haematoxylin-Eosin Slides Through a Cascade of Deep Learning Classifiers via Multi-Instance Learning. J. Imaging 2020, 6, 82.
14. Khoshdel, V.; Asefi, M.; Ashraf, A.; LoVetri, J. Full 3D Microwave Breast Imaging Using a Deep-Learning Technique. J. Imaging 2020, 6, 80.
15. Dupont, G.; Kalinicheva, E.; Sublime, J.; Rossant, F.; Pâques, M. Analyzing Age-Related Macular Degeneration Progression in Patients with Geographic Atrophy Using Joint Autoencoders for Unsupervised Change Detection. J. Imaging 2020, 6, 57.
16. Almeida, M.A.M.; Santos, I.A.X. Classification Models for Skin Tumor Detection Using Texture Analysis in Medical Images. J. Imaging 2020, 6, 51.
17. Tang, M.; Kumar, P.; Chen, H.; Shrivastava, A. Deep Multimodal Learning for the Diagnosis of Autism Spectrum Disorder. J. Imaging 2020, 6, 47.
18. Pintelas, E.; Liaskos, M.; Livieris, I.E.; Kotsiantis, S.; Pintelas, P. Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction. J. Imaging 2020, 6, 37.
19. Kieu, S.T.H.; Bade, A.; Hijazi, M.H.A.; Kolivand, H. A Survey of Deep Learning for Lung Disease Detection on Medical Images: State-of-the-Art, Taxonomy, Issues and Future Directions. J. Imaging 2020, 6, 131.
20. Debelee, T.G.; Kebede, S.R.; Schwenker, F.; Shewarega, Z.M. Deep Learning in Selected Cancers’ Image Analysis—A Survey. J. Imaging 2020, 6, 121.
21. Aruleba, K.; Obaido, G.; Ogbuokiri, B.; Fadaka, A.O.; Klein, A.; Adekiya, T.A.; Aruleba, R.T. Applications of Computational Methods in Biomedical Breast Cancer Imaging Diagnostics: A Review. J. Imaging 2020, 6, 105.
22. Singh, A.; Sengupta, S.; Lakshminarayanan, V. Explainable Deep Learning Models in Medical Image Analysis. J. Imaging 2020, 6, 52.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
