Medical Image Classification
Editors
Prof. Dr. Sheryl Berlin Brahnam
Collection Editor
Information Technology & Cybersecurity Department, Missouri State University, 901 South National Avenue, Springfield, MO 65804, USA
Interests: deep learning (ensembles of deep learners); medical image classification (general-purpose image classifiers, neonatal pain detection); biometrics systems (fingerprint classification and recognition, face recognition)
Dr. Loris Nanni
Collection Editor
Department of Information Engineering, University of Padua, Via Gradenigo 6, 35131 Padova, Italy
Interests: deep learning (ensembles of deep learners, transfer learning); computer vision (general-purpose image classifiers, medical image classification, texture descriptors); biometrics systems (fingerprint classification and recognition, signature verification, face recognition)
Dr. Rick Brattin
Collection Editor
Information Technology & Cybersecurity Department, Missouri State University, 901 South National Avenue, Springfield, MO 65804, USA
Interests: creating business value with data analytics and information-producing technologies; image, text, and sound classification
Topical Collection Information
Dear Colleagues,
Given the plethora of new instruments and sensors (some now attached to cell phones), medical data are accumulating at an unprecedented rate, and there is every reason to believe that both the complexity and the volume of these data will continue to snowball. More than ever, machine learning algorithms are needed to realize the potential for medical science embedded in this avalanche of data.
The purpose of this Topical Collection is to collect studies representing the state of the art in medical image classification across many different modalities: computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), functional MRI (fMRI), electroencephalography (EEG), etc. Multimodal systems are especially, though not exclusively, solicited.
Prof. Dr. Sheryl Berlin Brahnam
Prof. Dr. Loris Nanni
Dr. Rick Brattin
Collection Editors
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript.
The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs).
Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
Keywords
- biomedical image analysis
- generative adversarial networks
- multimodal images
- deep learners
- convolutional neural networks
- image segmentation
- feature learning
- image augmentation
- ensembles
- descriptors
Published Papers (10 papers)
Open Access Article
Multi-Perspective Hierarchical Deep-Fusion Learning Framework for Lung Nodule Classification
by Kazim Sekeroglu and Ömer Muhammet Soysal
Cited by 4 | Viewed by 1948
Abstract
Lung cancer is the leading cause of cancer mortality in both men and women. Computer-aided detection (CAD) and diagnosis systems can play a very important role in helping physicians with cancer treatment. This study proposes a hierarchical deep-fusion learning scheme in a CAD framework for the detection of nodules from computed tomography (CT) scans. In the proposed hierarchical approach, a decision is made at each level using the decisions from the previous level. Further, individual decisions are computed for several perspectives of a volume of interest. This study explores three different approaches to obtaining decisions in a hierarchical fashion. The first model utilizes raw images. The second model uses a single type of feature image having salient content. The last model employs multi-type feature images. All models learn their parameters by means of supervised learning. The proposed CAD frameworks are tested using lung CT scans from the LIDC/IDRI database. The experimental results show that the proposed multi-perspective hierarchical fusion approach significantly improves classification performance. The proposed hierarchical deep-fusion learning model achieved a sensitivity of 95% with only 0.4 false positives per scan.
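As a rough illustration of the hierarchical scheme, the sketch below has level-1 networks classify individual perspectives of the volume of interest, with a higher level fusing their softmax decisions. This is a minimal PyTorch sketch of the general idea, not the authors' implementation; class names and layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class PerspectiveNet(nn.Module):
    """Level-1 learner: classifies one 2D view (perspective) of the VOI."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class HierarchicalFusion(nn.Module):
    """Level-2 learner: fuses the softmax decisions of all perspective nets."""
    def __init__(self, num_views: int = 3, num_classes: int = 2):
        super().__init__()
        self.views = nn.ModuleList(PerspectiveNet(num_classes) for _ in range(num_views))
        self.fuse = nn.Linear(num_views * num_classes, num_classes)

    def forward(self, views):  # list of (B, 1, H, W) tensors, one per perspective
        decisions = [net(x).softmax(dim=1) for net, x in zip(self.views, views)]
        return self.fuse(torch.cat(decisions, dim=1))

model = HierarchicalFusion()
views = [torch.randn(4, 1, 64, 64) for _ in range(3)]  # e.g., axial/coronal/sagittal
print(model(views).shape)  # torch.Size([4, 2])
```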
Open Access Article
Comparison of Different Convolutional Neural Network Activation Functions and Methods for Building Ensembles for Small to Midsize Medical Data Sets
by Loris Nanni, Sheryl Brahnam, Michelangelo Paci and Stefano Ghidoni
Cited by 14 | Viewed by 4620
Abstract
CNNs and other deep learners are now state-of-the-art in medical imaging research. However, the small sample size of many medical data sets dampens performance and results in overfitting. In some medical areas, it is simply too labor-intensive and expensive to amass images numbering in the hundreds of thousands. Building deep CNN ensembles of pre-trained CNNs is one powerful method for overcoming this problem. Ensembles combine the outputs of multiple classifiers to improve performance. This method relies on the introduction of diversity, which can be introduced on many levels in the classification workflow. A recent ensembling method that has shown promise is to vary the activation functions in a set of CNNs or within different layers of a single CNN. This study examines the performance of both methods using a large set of twenty activation functions, six of which are presented here for the first time: 2D Mexican ReLU, TanELU, MeLU + GaLU, Symmetric MeLU, Symmetric GaLU, and Flexible MeLU. The proposed method was tested on fifteen medical data sets representing various classification tasks. The best-performing ensemble combined two well-known CNNs (VGG16 and ResNet50) whose standard ReLU activation layers were randomly replaced with one of the other activation functions. Results demonstrate the superior performance of this approach.
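The ensembling recipe (take a standard CNN, randomly swap its ReLU layers for other activations, and combine several such variants) can be sketched roughly as follows. The paper's novel activations (2D Mexican ReLU, the MeLU variants, etc.) are custom modules not found in standard libraries, so stock PyTorch activations stand in for them here.

```python
import random
import torch.nn as nn
from torchvision.models import vgg16

# Stand-in activation pool; the paper's MeLU/GaLU variants would be custom
# nn.Module classes dropped into this same list.
ACTIVATIONS = [nn.ReLU, nn.ELU, nn.LeakyReLU, nn.PReLU, nn.SiLU, nn.Tanh]

def randomize_activations(model: nn.Module) -> nn.Module:
    """Recursively replace every ReLU with a randomly drawn activation."""
    for name, child in model.named_children():
        if isinstance(child, nn.ReLU):
            setattr(model, name, random.choice(ACTIVATIONS)())
        else:
            randomize_activations(child)
    return model

# An ensemble of stochastically varied networks; their softmax outputs
# would be averaged (sum rule) at inference time.
ensemble = [randomize_activations(vgg16(weights=None)) for _ in range(5)]
```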
Open Access Article
Impact of Lung Segmentation on the Diagnosis and Explanation of COVID-19 in Chest X-ray Images
by Lucas O. Teixeira, Rodolfo M. Pereira, Diego Bertolini, Luiz S. Oliveira, Loris Nanni, George D. C. Cavalcanti and Yandre M. G. Costa
Cited by 86 | Viewed by 8019
Abstract
COVID-19 frequently provokes pneumonia, which can be diagnosed using imaging exams. Chest X-ray (CXR) imaging is often useful because it is cheap, fast, widespread, and uses less radiation. Here, we demonstrate the impact of lung segmentation on COVID-19 identification using CXR images and evaluate which contents of the image most influenced the results. Semantic segmentation was performed using a U-Net CNN architecture, and classification using three CNN architectures (VGG, ResNet, and Inception). Explainable artificial intelligence techniques were employed to estimate the impact of segmentation. A three-class database was composed: lung opacity (pneumonia), COVID-19, and normal. We assessed the impact of creating a CXR image database from different sources and the generalization of COVID-19 identification from one source to another. The segmentation achieved a Jaccard distance of 0.034 and a Dice coefficient of 0.982. Classification using segmented images achieved an F1-score of 0.88 for the multi-class setup and 0.83 for COVID-19 identification. In the cross-dataset scenario, we obtained an F1-score of 0.74 and an area under the ROC curve of 0.9 for COVID-19 identification using segmented images. The experiments support the conclusion that even after segmentation, there is a strong bias introduced by underlying factors from the different sources.
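For reference, the two overlap measures reported above can be computed as in this small NumPy sketch, where Jaccard distance means one minus the Jaccard index. Note that a Dice of 0.982 implies a Jaccard distance of about 1 - 0.982/(2 - 0.982) ≈ 0.035, consistent with the reported 0.034.

```python
import numpy as np

def dice_jaccard(pred: np.ndarray, target: np.ndarray):
    """Overlap metrics for binary lung masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = 2 * inter / (pred.sum() + target.sum())
    jaccard_distance = 1 - inter / np.logical_or(pred, target).sum()
    return dice, jaccard_distance
```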
Open Access Article
Medical Augmentation (Med-Aug) for Optimal Data Augmentation in Medical Deep Learning Networks
by Justin Lo, Jillian Cardinell, Alejo Costanzo and Dafna Sussman
Cited by 6 | Viewed by 4103
Abstract
Deep learning (DL) algorithms have become an increasingly popular choice for image classification and segmentation tasks; however, their range of applications can be limited. This limitation stems from their requiring ample data to achieve high performance and adequate generalizability, and clinical imaging data are not always available in large quantities. This issue can be alleviated by using data augmentation (DA) techniques. The choice of DA is important because a poor selection can hinder the performance of a DL algorithm. We propose a DA policy search algorithm that offers an extended set of transformations accommodating the variations in biomedical imaging datasets. The algorithm makes use of the efficient, high-dimensional optimizer Bi-Population Covariance Matrix Adaptation Evolution Strategy (BIPOP-CMA-ES) and returns an optimal DA policy for any input imaging dataset and DL algorithm. Our proposed algorithm, Medical Augmentation (Med-Aug), can be implemented by other researchers in related medical DL applications to improve their models' performance. Furthermore, we present the optimal DA policies we found for a variety of medical datasets and popular segmentation networks, for other researchers to use in related tasks.
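A rough sketch of such a policy-search loop, assuming the pycma package's fmin2 interface for BIPOP-CMA-ES; the objective below is a stand-in for the real train-and-validate step, and the parameter names are illustrative.

```python
import cma  # pycma: pip install cma

def val_loss_for_policy(theta):
    """Stand-in objective. In practice: build the augmentation pipeline with
    magnitudes theta = (rotation_deg, shear, brightness), briefly train the
    segmentation network, and return its validation loss."""
    target = [15.0, 0.05, 0.3]  # pretend these magnitudes are optimal
    return sum((t - g) ** 2 for t, g in zip(theta, target))

# bipop=True makes the restarts alternate between large and small
# population sizes, which is the BIPOP variant of CMA-ES.
best_theta, es = cma.fmin2(val_loss_for_policy, x0=[10.0, 0.1, 0.2],
                           sigma0=0.5, restarts=4, bipop=True,
                           options={'verbose': -9})
print(best_theta)
```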
Open Access Article
Co-Density Distribution Maps for Advanced Molecule Colocalization and Co-Distribution Analysis
by Ilaria De Santis, Luca Lorenzini, Marzia Moretti, Elisa Martella, Enrico Lucarelli, Laura Calzà and Alessandro Bevilacqua
Cited by 2 | Viewed by 3320
Abstract
Cellular and subcellular spatial colocalization of structures and molecules in biological specimens is an important indicator of their co-compartmentalization and interaction. Presently, colocalization in biomedical images is addressed with visual inspection and quantified by co-occurrence and correlation coefficients. However, such measures alone cannot capture the complexity of the interactions, which is not limited to signal intensity. Building on the previously developed density distribution maps (DDMs), here we present a method for advancing current colocalization analysis by introducing co-density distribution maps (cDDMs), which uniquely provide information about molecules' absolute and relative positions and local abundance. We exemplify the benefits of our method by developing cDDM-integrated pipelines for the analysis of the co-distribution of molecule pairs in three different real-case image datasets. First, cDDMs are shown to be indicators of the presence and degree of colocalization, able to increase the reliability of the correlation coefficients currently used to detect colocalization. In addition, they provide simultaneous visual and quantitative support, which opens new investigation paths and biomedical considerations. Finally, thanks to the coDDMaker software we developed, cDDMs become an enabling tool for the quasi-real-time monitoring of experiments and a potential improvement for a large number of biomedical studies.
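As one plausible reading of the idea (not necessarily the authors' exact cDDM definition), a co-density map can be formed by estimating each channel's local density and combining the two pixel-wise:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def co_density_map(ch_a: np.ndarray, ch_b: np.ndarray, sigma: float = 5.0):
    """Estimate each molecule's local density by Gaussian smoothing of its
    channel, then take the pixel-wise geometric mean so a location scores
    high only where BOTH molecules are locally abundant."""
    dens_a = gaussian_filter(ch_a.astype(float), sigma)
    dens_b = gaussian_filter(ch_b.astype(float), sigma)
    return np.sqrt(dens_a * dens_b)

rng = np.random.default_rng(0)
cddm = co_density_map(rng.random((256, 256)), rng.random((256, 256)))
```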
Open Access Article
Detection of Diabetic Eye Disease from Retinal Images Using a Deep Learning Based CenterNet Model
by Tahira Nazir, Marriam Nawaz, Junaid Rashid, Rabbia Mahum, Momina Masood, Awais Mehmood, Farooq Ali, Jungeun Kim, Hyuk-Yoon Kwon and Amir Hussain
Cited by 75 | Viewed by 8548
Abstract
Diabetic retinopathy (DR) is an eye disease that alters the blood vessels of a person suffering from diabetes. Diabetic macular edema (DME) occurs when DR affects the macula, causing fluid accumulation in the macula. Efficient screening systems require experts to manually analyze images to recognize diseases. However, due to the challenging nature of the screening method and the lack of trained human resources, devising effective screening-oriented treatment is an expensive task. Automated systems attempt to cope with these challenges; however, existing methods do not generalize well to multiple diseases and real-world scenarios. To solve the aforementioned issues, we propose a new method comprising two main steps. The first involves dataset preparation and feature extraction; the other improves a custom deep-learning-based CenterNet model trained for eye disease classification. Initially, we generate annotations for suspected samples to locate the precise region of interest, while the other part of the proposed solution trains the CenterNet model on the annotated images. Specifically, we use DenseNet-100 for feature extraction, on which the one-stage detector, CenterNet, is employed to localize and classify the disease lesions. We evaluated our method on the challenging APTOS-2019 and IDRiD datasets and attained an average accuracy of 97.93% and 98.10%, respectively. We also performed cross-dataset validation with the benchmark EYEPACS and DIARETDB1 datasets. Both qualitative and quantitative results demonstrate that our proposed approach outperforms state-of-the-art methods thanks to the more effective localization power of CenterNet, as it can easily recognize small lesions and deal with over-fitted training data. Our proposed framework is proficient in correctly locating and classifying disease lesions. In comparison with existing DR and DME classification approaches, our method can extract representative key points from low-intensity and noisy images and accurately classify them. Hence, our approach can play an important role in the automated detection and recognition of DR and DME lesions.
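A CenterNet-style detector of the kind described can be sketched as follows: a DenseNet backbone (torchvision's DenseNet-121 standing in for the paper's DenseNet-100) feeding per-pixel heads for the center heatmap, box size, and offset. The upsampling stages and focal/L1 losses of a full CenterNet are omitted.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

class CenterHeatmapDetector(nn.Module):
    """DenseNet backbone with CenterNet-style per-pixel prediction heads."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.backbone = densenet121(weights=None).features  # (B, 1024, H/32, W/32)
        self.heatmap = nn.Conv2d(1024, num_classes, 1)  # peaks at lesion centers
        self.size = nn.Conv2d(1024, 2, 1)               # box width/height
        self.offset = nn.Conv2d(1024, 2, 1)             # sub-pixel center offset

    def forward(self, x):
        f = self.backbone(x)
        return self.heatmap(f).sigmoid(), self.size(f), self.offset(f)

model = CenterHeatmapDetector(num_classes=2)  # e.g., DR and DME lesion classes
hm, wh, off = model(torch.randn(1, 3, 512, 512))
```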
Open Access Article
Cross Attention Squeeze Excitation Network (CASE-Net) for Whole Body Fetal MRI Segmentation
by Justin Lo, Saiee Nithiyanantham, Jillian Cardinell, Dylan Young, Sherwin Cho, Abirami Kirubarajan, Matthias W. Wagner, Roxana Azma, Steven Miller, Mike Seed, Birgit Ertl-Wagner and Dafna Sussman
Cited by 9 | Viewed by 3132
Abstract
Segmentation of the fetus from 2-dimensional (2D) magnetic resonance imaging (MRI) can aid radiologists with clinical decision making for disease diagnosis. Machine learning can facilitate this process of automatic segmentation, making diagnosis more accurate and user-independent. We propose a deep learning (DL) framework for 2D fetal MRI segmentation using a Cross Attention Squeeze Excitation Network (CASE-Net) for research and clinical applications. CASE-Net is an end-to-end segmentation architecture built from evidence-based modules. The goal of CASE-Net is to emphasize the localization of contextual information relevant to biomedical segmentation by combining attention mechanisms with squeeze-and-excitation (SE) blocks. This is a retrospective study with 34 patients. Our experiments show that the proposed CASE-Net achieved the highest segmentation Dice score of 87.36%, outperforming other competitive segmentation architectures.
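The squeeze-and-excitation block that CASE-Net combines with attention is a standard component (Hu et al.) and can be written compactly in PyTorch:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: global-average-pool each channel ("squeeze"),
    pass through a bottleneck MLP, and rescale channels by the resulting
    sigmoid gates ("excitation")."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                    # x: (B, C, H, W)
        gates = self.fc(x.mean(dim=(2, 3)))  # (B, C)
        return x * gates[:, :, None, None]

out = SEBlock(64)(torch.randn(2, 64, 32, 32))  # same shape, channels reweighted
```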
Open Access Editor's Choice Article
Hemorrhage Detection Based on 3D CNN Deep Learning Framework and Feature Fusion for Evaluating Retinal Abnormality in Diabetic Patients
by Sarmad Maqsood, Robertas Damaševičius and Rytis Maskeliūnas
Cited by 56 | Viewed by 5408
Abstract
Diabetic retinopathy (DR) is the main cause of blindness in diabetic patients. Early and accurate diagnosis can improve the analysis and prognosis of the disease. One of the earliest symptoms of DR is the appearance of hemorrhages in the retina. Therefore, we propose a new method for accurate hemorrhage detection from retinal fundus images. First, the proposed method uses a modified contrast enhancement method to improve the edge details of the input retinal fundus images. In the second stage, a new convolutional neural network (CNN) architecture is proposed to detect hemorrhages. A modified pre-trained CNN model is used to extract features from the detected hemorrhages. In the third stage, all extracted feature vectors are fused using the convolutional sparse image decomposition method, and finally, the best features are selected using the multi-logistic regression controlled entropy variance approach. The proposed method is evaluated on 1509 images from the HRF, DRIVE, STARE, MESSIDOR, DIARETDB0, and DIARETDB1 databases and achieves an average accuracy of 97.71%, which is superior to previous works. Moreover, the proposed hemorrhage detection system attains better performance, in terms of visual quality and quantitative analysis with high accuracy, than state-of-the-art methods.
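The paper uses its own modified contrast-enhancement method; as a common baseline for this preprocessing step, CLAHE applied to the green channel of a fundus image (where hemorrhages contrast best) looks like this with OpenCV:

```python
import cv2

def enhance_fundus(path: str):
    """CLAHE-based contrast enhancement of a retinal fundus image."""
    img = cv2.imread(path)
    b, g, r = cv2.split(img)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.merge((b, clahe.apply(g), r))  # enhance the green channel only
```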
Open Access Article
Multi-Dimension and Multi-Feature Hybrid Learning Network for Classifying the Sub Pathological Type of Lung Nodules through LDCT
by Jiacheng Fan, Jianying Bao, Jianlin Xu and Jinqiu Mo
Cited by 2 | Viewed by 1910
Abstract
In order to develop appropriate treatment and rehabilitation plans for the different subpathological types (PILs and IAs) of lung nodules, it is important to diagnose them through low-dose spiral computed tomography (LDCT) during routine screening before surgery. Based on the characteristics of the different subpathological lung nodules expressed in LDCT images, we propose a multi-dimension and multi-feature hybrid learning neural network in this paper. Our network consists of a 2D network part and a 3D network part. The feature vectors extracted from the 2D and 3D networks are then combined and classified by XGBoost. Through this formation, the network can better integrate the feature information from the 2D and 3D networks. The main learning block of the network is a residual block combined with an attention mechanism, which enables the network to learn from multiple features and pay more attention to the key feature map among all the feature maps in the different channels. We conducted experiments on a dataset collected from a cooperating hospital. The results show that the accuracy, sensitivity, and specificity of our network are 83%, 86%, and 80%, respectively. It is feasible to use this network to classify the subpathological type of lung nodules during routine screening.
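The late-fusion step, concatenating the 2D and 3D feature vectors and handing them to XGBoost, might look like the following sketch; the feature dimensions and random data are placeholders:

```python
import numpy as np
from xgboost import XGBClassifier

# Placeholder feature vectors, one per nodule: in the real pipeline these
# come from the trained 2D and 3D network branches.
n = 200
feat_2d = np.random.randn(n, 128)
feat_3d = np.random.randn(n, 256)
labels = np.random.randint(0, 2, n)  # e.g., 0 = PIL, 1 = IA

fused = np.concatenate([feat_2d, feat_3d], axis=1)  # simple late fusion
clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(fused, labels)
print(clf.predict(fused[:5]))
```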
Open Access Article
Autosegmentation of Prostate Zones and Cancer Regions from Biparametric Magnetic Resonance Images by Using Deep-Learning-Based Neural Networks
by Chih-Ching Lai, Hsin-Kai Wang, Fu-Nien Wang, Yu-Ching Peng, Tzu-Ping Lin, Hsu-Hsia Peng and Shu-Huei Shen
Cited by 17 | Viewed by 3069
Abstract
The accuracy of diagnosing prostate cancer (PCa) has increased with the development of multiparametric magnetic resonance imaging (mpMRI). Biparametric magnetic resonance imaging (bpMRI) has been found to have a diagnostic accuracy comparable to mpMRI in detecting PCa. However, prostate MRI assessment relies on human experts and specialized training, with considerable inter-reader variability. Deep learning may be a more robust approach for prostate MRI assessment. Here we present a method for autosegmenting the prostate zones and cancer region using SegNet, a deep convolutional neural network (DCNN) model. We used the PROSTATEx dataset to train the model and combined different sequences into the three channels of a single image. For each subject, all slices that contained the transition zone (TZ), peripheral zone (PZ), and PCa region were selected. The datasets were produced using different combinations of images, including T2-weighted (T2W) images, diffusion-weighted images (DWI), and apparent diffusion coefficient (ADC) images. Among these groups, the T2W + DWI + ADC images exhibited the best performance, with a Dice similarity coefficient of 90.45% for the TZ, 70.04% for the PZ, and 52.73% for the PCa region. Image sequence analysis with a DCNN model has the potential to assist PCa diagnosis.
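The three-channel input construction, stacking co-registered T2W, DWI, and ADC slices as the channels of a single image, can be sketched as follows. The normalization choice here is an assumption, and registration/resampling is presumed already done:

```python
import numpy as np

def to_three_channel(t2w: np.ndarray, dwi: np.ndarray, adc: np.ndarray):
    """Stack co-registered T2W, DWI, and ADC slices into one RGB-like image,
    min-max normalizing each sequence so no single channel dominates."""
    def norm(x):
        x = x.astype(float)
        return (x - x.min()) / (x.max() - x.min() + 1e-8)
    return np.stack([norm(t2w), norm(dwi), norm(adc)], axis=-1)  # (H, W, 3)
```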