
AI Technology in Medical Image Analysis

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (31 October 2023) | Viewed by 26067

Special Issue Editors


Guest Editor
1. PhysCon Lab., University Research and Innovation Center, Óbuda University, 1034 Budapest, Hungary
2. Computational Intelligence Research Group, Sapientia Hungarian University of Transylvania, 540485 Targu Mures, Romania
Interests: computer science; image processing; pattern recognition

Guest Editor
PhysCon Lab., University Research and Innovation Center, Óbuda University, 1034 Budapest, Hungary
Interests: computer science; modeling and control of physiological systems; image processing; advanced non-linear control; human-computer interaction; physiological big data analysis

Special Issue Information

Dear Colleagues,

The amount of medical image data collected by imaging devices deployed in clinical practice is growing day by day. The number of medical experts who can reliably evaluate these images cannot keep pace with this growth, mainly because of the high cost of training them. Consequently, there is a strong need for automated image processing methods and procedures that can preprocess the collected images and draw the medical staff's attention to cases suspected of containing abnormal features. The final word regarding the diagnosis still belongs to the medical expert.

Artificial intelligence represents the foundation of decision support systems involved in medical image-based diagnosis. All processing steps, from image creation to quality enhancement, registration, segmentation, and interpretation, are usually assisted by machine intelligence.

This Special Issue will publish high-quality, original research papers that advance automated medical diagnosis support. Submissions may address the diagnosis of any abnormal condition of the human organism, based on any medical imaging modality, and may present either complete diagnostic procedures or relevant improvements to individual processing steps of previously reported methodologies.

Dr. László Szilágyi
Prof. Dr. Levente Kovács
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image processing
  • image segmentation
  • pattern recognition
  • feature generation
  • feature ranking and selection
  • classification methods
  • neural networks, convolutional neural networks
  • deep learning
  • medical diagnosis

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (12 papers)


Editorial


5 pages, 161 KiB  
Editorial
Special Issue: Artificial Intelligence Technology in Medical Image Analysis
by László Szilágyi and Levente Kovács
Appl. Sci. 2024, 14(5), 2180; https://doi.org/10.3390/app14052180 - 5 Mar 2024
Viewed by 1754
Abstract
Artificial intelligence (AI) technologies have significantly advanced the field of medical imaging, revolutionizing diagnostic and therapeutic processes [...] Full article
(This article belongs to the Special Issue AI Technology in Medical Image Analysis)

Research


12 pages, 3484 KiB  
Article
Investigating Effective Data Augmentation Techniques for Accurate Gastric Classification in the Development of a Deep Learning-Based Computer-Aided Diagnosis System
by Jae-beom Park, Han-sung Lee and Hyun-chong Cho
Appl. Sci. 2023, 13(22), 12325; https://doi.org/10.3390/app132212325 - 14 Nov 2023
Viewed by 1298
Abstract
Gastric cancer is a significant health concern, particularly in Korea, and its accurate detection is crucial for effective treatment. However, a gastroscopic biopsy can be time-consuming and may thus delay diagnosis and treatment. Therefore, this study proposed a gastric cancer computer-aided diagnosis (CADx) method to facilitate more efficient image analysis. Owing to the challenges in collecting medical image data, small datasets are often used in this field. To overcome this limitation, we used AutoAugment’s ImageNet policy and applied cut-and-paste techniques using a sliding window algorithm to further increase the size of the dataset. The results showed an accuracy of 0.8317 for T-stage 1 and T-stage 4 image classification and an accuracy of 0.8417 for early gastric cancer and normal image classification, indicating improvements of 7% and 9%, respectively. Furthermore, through the application of test-time augmentation to the early gastric cancer and normal image datasets, the classification accuracy was improved by 5.8% to 0.9000. Overall, the results of this study demonstrate the effectiveness of the proposed augmentation methods for enhancing gastric cancer classification performance. Full article
(This article belongs to the Special Issue AI Technology in Medical Image Analysis)
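The exact augmentation settings are not given above; as a rough illustration of the described pipeline, the sketch below (assuming PyTorch/torchvision and a hypothetical window size and stride) chains a naive sliding-window cut-and-paste step with AutoAugment's ImageNet policy.

```python
# Minimal sketch: sliding-window cut-and-paste followed by AutoAugment (ImageNet
# policy), roughly in the spirit of the augmentation described above. Window size,
# stride, and the order of the two steps are illustrative assumptions.
import random
from PIL import Image
import torchvision.transforms as T

autoaugment = T.AutoAugment(policy=T.AutoAugmentPolicy.IMAGENET)

def sliding_window_cut_and_paste(img: Image.Image, window: int = 64, stride: int = 64) -> Image.Image:
    """Copy one randomly chosen window of the image onto another window position."""
    out = img.copy()
    w, h = img.size
    xs = range(0, max(w - window, 1), stride)
    ys = range(0, max(h - window, 1), stride)
    positions = [(x, y) for x in xs for y in ys]
    if len(positions) < 2:          # image too small for two distinct windows
        return out
    (sx, sy), (dx, dy) = random.sample(positions, 2)
    patch = img.crop((sx, sy, sx + window, sy + window))
    out.paste(patch, (dx, dy))
    return out

def augment(img: Image.Image) -> Image.Image:
    return autoaugment(sliding_window_cut_and_paste(img))
```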

16 pages, 3526 KiB  
Article
Fully Automatic Thoracic Cavity Segmentation in Dynamic Contrast Enhanced Breast MRI Using Deep Convolutional Neural Networks
by Marco Berchiolli, Susann Wolfram, Wamadeva Balachandran and Tat-Hean Gan
Appl. Sci. 2023, 13(18), 10160; https://doi.org/10.3390/app131810160 - 9 Sep 2023
Viewed by 1061
Abstract
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) is regarded as one of the main diagnostic tools for breast cancer. Several methodologies have been developed to automatically localize suspected malignant breast lesions. Changes in tissue appearance in response to the injection of the contrast agent (CA) are indicative of the presence of malignant breast lesions. However, these changes are extremely similar to the ones of internal organs, such as the heart. Thus, the task of chest cavity segmentation is necessary for the development of lesion detection. In this work, a data-efficient approach is proposed, to automatically segment breast MRI data. Specifically, a study on several UNet-like architectures (Dynamic UNet) based on ResNet is presented. Experiments quantify the impact of several additions to baseline models of varying depth, such as self-attention and the presence of a bottlenecked connection. The proposed methodology is demonstrated to outperform the current state of the art both in terms of data efficiency and in terms of similarity index when compared to manually segmented data. Full article
(This article belongs to the Special Issue AI Technology in Medical Image Analysis)
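As a rough illustration of the Dynamic UNet family of architectures studied above, the sketch below uses fastai's unet_learner with a ResNet backbone and self-attention enabled; the dataset layout, backbone choice, and training schedule are illustrative assumptions, not the authors' exact setup.

```python
# A rough sketch of a ResNet-backed Dynamic UNet with self-attention via fastai.
# The folder layout (images/ and masks/ with matching file names) is assumed.
from fastai.vision.all import *

path = Path("breast_mri_slices")                     # assumed dataset location
dls = SegmentationDataLoaders.from_label_func(
    path,
    fnames=get_image_files(path / "images"),
    label_func=lambda f: path / "masks" / f.name,    # mask file matches image name
    codes=["background", "thoracic_cavity"],
    bs=8,
)

learn = unet_learner(dls, resnet34, self_attention=True, metrics=Dice())
learn.fine_tune(10)                                  # transfer learning from ImageNet weights
```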

19 pages, 6924 KiB  
Article
Enhancing an Imbalanced Lung Disease X-ray Image Classification with the CNN-LSTM Model
by Julio Fachrel, Anindya Apriliyanti Pravitasari, Intan Nurma Yulita, Mulya Nurmansyah Ardhisasmita and Fajar Indrayatna
Appl. Sci. 2023, 13(14), 8227; https://doi.org/10.3390/app13148227 - 15 Jul 2023
Cited by 2 | Viewed by 1857
Abstract
Lung diseases have a significant impact on respiratory health, causing various symptoms and posing challenges in diagnosis and treatment. This research presents a methodology for classifying lung diseases using chest X-ray images, specifically focusing on COVID-19, pneumonia, and normal cases. The study introduces an optimal architecture for convolutional neural network (CNN) and long short-term memory (LSTM) models, considering evaluation metrics and training efficiency. Furthermore, the issue of imbalanced datasets is addressed through the application of some image augmentation techniques to enhance model performance. The most effective model comprises five convolutional blocks, two LSTM layers, and no augmentation, achieving an impressive F1 score of 0.9887 with a training duration of 91 s per epoch. Misclassifications primarily occurred in normal cases, accounting for only 3.05% of COVID-19 data. The pneumonia class demonstrated excellent precision, while the normal class exhibited high recall and an F1 score. Comparatively, the CNN-LSTM model outperformed the CNN model in accurately classifying chest X-ray images and identifying infected lungs. This research provides valuable insights for improving lung disease diagnosis, enabling timely and accurate identification of lung diseases, and ultimately enhancing patients’ outcomes. Full article
(This article belongs to the Special Issue AI Technology in Medical Image Analysis)
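A minimal Keras sketch of the CNN-LSTM pattern described above: five convolutional blocks extract spatial features, whose rows are then read as a sequence by two LSTM layers. Filter counts, input size, and other layer details are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch of a CNN-LSTM classifier for three classes (COVID-19, pneumonia, normal).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(input_shape=(224, 224, 1), n_classes=3):
    inputs = layers.Input(shape=input_shape)
    x = inputs
    for filters in (32, 64, 128, 256, 256):          # five convolutional blocks
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(2)(x)
    # Collapse each feature-map row into one timestep: (H, W, C) -> (H, W*C)
    h, w, c = x.shape[1], x.shape[2], x.shape[3]
    x = layers.Reshape((h, w * c))(x)
    x = layers.LSTM(128, return_sequences=True)(x)   # first LSTM layer
    x = layers.LSTM(64)(x)                           # second LSTM layer
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_cnn_lstm()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```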

20 pages, 1394 KiB  
Article
A Histopathological Image Classification Method Based on Model Fusion in the Weight Space
by Gang Zhang, Zhi-Fei Lai, Yi-Qun Chen, Hong-Tao Liu and Wei-Jun Sun
Appl. Sci. 2023, 13(12), 7009; https://doi.org/10.3390/app13127009 - 10 Jun 2023
Viewed by 1246
Abstract
Automatic classification of histopathological images plays an important role in computer-aided diagnosis systems. The automatic classification model of histopathological images based on deep neural networks has received widespread attention. However, the performance of deep models is affected by many factors, such as training hyperparameters, model structure, dataset quality, and training cost. In order to reduce the impact of the above factors on model training and reduce the training and inference costs of the model, we propose a novel method based on model fusion in the weight space, which is inspired by stochastic weight averaging and model soup. We use the cyclical learning rate (CLR) strategy to fine-tune the ingredient models and propose a ranking strategy based on accuracy and diversity for candidate model selection. Compared to the single model, the weight fusion of ingredient models can obtain a model whose performance is closer to the expected value of the error basin, which may improve the generalization ability of the model. Compared to the ensemble model with n base models, the testing cost of the proposed model is theoretically 1/n of that of the ensemble model. Experimental results on two histopathological image datasets show the effectiveness of the proposed model in comparison to baseline ones, including ResNet, VGG, DenseNet, and their ensemble versions. Full article
(This article belongs to the Special Issue AI Technology in Medical Image Analysis)
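A minimal sketch of the weight-space fusion idea referenced above (in the spirit of model soups and stochastic weight averaging): the state dicts of several fine-tuned ingredient models are averaged into a single model, so inference costs the same as one model. The model class and checkpoint paths are illustrative assumptions.

```python
# Uniform averaging of ingredient-model weights into one fused model.
import torch
from torchvision.models import resnet50

def average_state_dicts(state_dicts):
    """Uniformly average a list of compatible state dicts, key by key."""
    avg = {}
    for key in state_dicts[0]:
        stacked = torch.stack([sd[key].float() for sd in state_dicts])
        avg[key] = stacked.mean(dim=0)
    return avg

checkpoints = ["ingredient_0.pt", "ingredient_1.pt", "ingredient_2.pt"]  # assumed paths
state_dicts = [torch.load(p, map_location="cpu") for p in checkpoints]

fused = resnet50(num_classes=2)                 # e.g. benign vs. malignant tissue
fused.load_state_dict(average_state_dicts(state_dicts))
fused.eval()                                    # a single forward pass at test time
```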

14 pages, 2007 KiB  
Article
Semi-Supervised Medical Image Classification Combined with Unsupervised Deep Clustering
by Bang Xiao and Chunyue Lu
Appl. Sci. 2023, 13(9), 5520; https://doi.org/10.3390/app13095520 - 28 Apr 2023
Cited by 1 | Viewed by 2221
Abstract
An effective way to improve the performance of deep neural networks in most computer vision tasks is to increase the quantity of labeled data and the quality of labels. However, in the analysis and processing of medical images, high-quality annotation depends on the experience and professional knowledge of experts, which makes it very difficult to obtain a large number of high-quality annotations. Therefore, we propose a new semi-supervised framework for medical image classification that combines semi-supervised classification with unsupervised deep clustering. Spreading label information to unlabeled data by alternately running the two tasks helps the model extract semantic information from unlabeled data and prevents it from overfitting to the small amount of labeled data. Compared with current methods, our framework enhances the robustness of the model and reduces the influence of outliers. We conducted comparative experiments on public benchmark medical image datasets to verify our method. On the ISIC 2018 dataset, our method surpasses other methods by more than 0.85% on AUC and 1.08% on sensitivity. On the ICIAR BACH 2018 dataset, our method achieved 94.12% AUC, 77.92% F1-score, 77.69% recall, and 78.16% precision; the error rate is at least 1.76% lower than that of other methods. These results show the effectiveness of our method in medical image classification. Full article
(This article belongs to the Special Issue AI Technology in Medical Image Analysis)
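A highly simplified sketch of one ingredient of such a framework: features of unlabeled images are clustered, and each cluster inherits the majority label of the labeled samples that fall into it, producing pseudo-labels for further supervised training. This only illustrates the general label-spreading idea, not the authors' exact algorithm.

```python
# Cluster-based pseudo-labeling over precomputed feature vectors (NumPy arrays).
import numpy as np
from sklearn.cluster import KMeans

def cluster_pseudo_labels(labeled_feats, labels, unlabeled_feats, n_clusters=10):
    """Return one pseudo-label per unlabeled sample (-1 if its cluster has no labeled member)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    km.fit(np.vstack([labeled_feats, unlabeled_feats]))
    labeled_clusters = km.predict(labeled_feats)
    unlabeled_clusters = km.predict(unlabeled_feats)

    # Majority label per cluster, computed from the labeled samples only.
    cluster_to_label = {}
    for c in range(n_clusters):
        members = labels[labeled_clusters == c]
        cluster_to_label[c] = np.bincount(members).argmax() if len(members) else -1

    return np.array([cluster_to_label[c] for c in unlabeled_clusters])
```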

21 pages, 6881 KiB  
Article
Thalamus Segmentation Using Deep Learning with Diffusion MRI Data: An Open Benchmark
by Gustavo Retuci Pinheiro, Lorenza Brusini, Diedre Carmo, Renata Prôa, Thays Abreu, Simone Appenzeller, Gloria Menegaz and Leticia Rittner
Appl. Sci. 2023, 13(9), 5284; https://doi.org/10.3390/app13095284 - 23 Apr 2023
Viewed by 2903
Abstract
The thalamus is a subcortical brain structure linked to the motor system. Since certain changes within this structure are related to diseases such as multiple sclerosis and Parkinson’s, the characterization of the thalamus—e.g., shape assessment—is a crucial step in relevant studies and applications, including medical research and surgical planning. A robust and reliable thalamus-segmentation method is therefore required to meet these demands. Despite presenting low contrast for this particular structure, T1-weighted imaging is still the most common MRI sequence for thalamus segmentation. However, diffusion MRI (dMRI) captures different micro-structural details of the biological tissue and reveals more contrast at the thalamic borders, thereby serving as a better candidate for thalamus-segmentation methods. Accordingly, we propose a baseline multimodality thalamus-segmentation pipeline that combines dMRI and T1-weighted images within a CNN approach, achieving state-of-the-art levels of Dice overlap. Furthermore, we are hosting an open benchmark with a large, preprocessed, publicly available dataset that includes co-registered T1-weighted and dMRI data; manual thalamic masks; masks generated by three distinct automated methods; and a STAPLE consensus of the masks. The dataset, code, environment, and instructions for the benchmark leaderboard can be found on our GitHub and CodaLab. Full article
(This article belongs to the Special Issue AI Technology in Medical Image Analysis)
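Dice overlap, the similarity measure cited above for comparing automated and manual thalamic masks, can be computed for binary masks as in this minimal NumPy sketch.

```python
# Dice coefficient for two binary segmentation masks of equal shape.
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|); 1.0 for identical masks, 0.0 for disjoint ones."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    return float(2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps))
```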

18 pages, 3435 KiB  
Article
Automatic Tumor Identification from Scans of Histopathological Tissues
by Mantas Kundrotas, Edita Mažonienė and Dmitrij Šešok
Appl. Sci. 2023, 13(7), 4333; https://doi.org/10.3390/app13074333 - 29 Mar 2023
Cited by 2 | Viewed by 1706
Abstract
The latest progress in the development of artificial intelligence (AI), especially machine learning (ML), makes it possible to develop automated technologies that can eliminate, or at least reduce, human errors in analyzing health data. Owing to the ethics of AI usage in pathology and laboratory medicine, to the present day pathologists analyze slides of histopathologic tissue stained with hematoxylin and eosin under the microscope; by law this cannot be substituted and must undergo visual observation, as pathologists are fully accountable for the result. However, automated systems could solve complex problems that require an extremely fast response, high accuracy, or both at the same time. Such ML-based systems can be adapted to work with medical imaging data, for instance whole slide images (WSIs), allowing clinicians to review a much larger number of health cases in a shorter time and to identify the preliminary stages of cancer or other diseases, thereby improving health monitoring strategies. Moreover, the increased opportunity to forecast and control the spread of global diseases could help to create preliminary analyses and viable solutions. Accurate identification of a tumor, especially at an early stage, requires extensive expert knowledge, so cancerous tissue is often identified only after the patient experiences its side effects. The main goal of our study was to find more accurate ML methods and techniques that can lead to detecting tumor-damaged tissues in histopathological WSIs. According to the experiments that we conducted, there was a 1% AUC difference between the training and test datasets. Over several training iterations, the U-Net model size was nearly halved while accuracy also improved from 0.95491 to 0.95515 AUC. Convolutional models worked well on groups of different sizes when properly trained. With test-time augmentation (TTA), the result improved to 0.96870, and with the addition of a multi-model ensemble, it improved to 0.96977. We found that flaws in the models can be identified and fixed using specialized analysis techniques. A correction of the image processing parameters was sufficient to raise the AUC by almost 0.3%. The result of the individual model increased to 0.96664 AUC (more than 1% better than the previous best model) after additional training data preparation. This remains an arduous task for several reasons: deploying such systems globally requires maximum accuracy and further progress on the ethics of AI usage in medicine; furthermore, hospitals would need to support scientific validation by sharing clinical information, while retaining patient data anonymity, so that it could be systematically analyzed and improved upon by scientists, thereby proving the benefits of AI. Full article
(This article belongs to the Special Issue AI Technology in Medical Image Analysis)
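A minimal sketch of the test-time augmentation (TTA) and ensembling steps mentioned above: predictions over flipped copies of a tile are mapped back and averaged, and an ensemble additionally averages over several models. The model interface (a callable returning segmentation logits) is an assumption.

```python
# Flip-based TTA and multi-model averaging for a binary segmentation model.
import torch

def predict_with_tta(model, image: torch.Tensor) -> torch.Tensor:
    """image: (1, C, H, W) tensor; returns the averaged probability map."""
    flips = [
        (lambda t: t, lambda t: t),                                            # identity
        (lambda t: torch.flip(t, dims=[-1]), lambda t: torch.flip(t, dims=[-1])),  # horizontal
        (lambda t: torch.flip(t, dims=[-2]), lambda t: torch.flip(t, dims=[-2])),  # vertical
    ]
    preds = []
    with torch.no_grad():
        for forward, backward in flips:
            preds.append(backward(torch.sigmoid(model(forward(image)))))
    return torch.stack(preds).mean(dim=0)

def predict_with_ensemble(models, image: torch.Tensor) -> torch.Tensor:
    """Average TTA predictions over several trained models."""
    return torch.stack([predict_with_tta(m, image) for m in models]).mean(dim=0)
```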

12 pages, 1929 KiB  
Article
Deep-Learning Algorithms for Prescribing Insoles to Patients with Foot Pain
by Jeoung Kun Kim, Yoo Jin Choo, In Sik Park, Jin-Woo Choi, Donghwi Park and Min Cheol Chang
Appl. Sci. 2023, 13(4), 2208; https://doi.org/10.3390/app13042208 - 9 Feb 2023
Cited by 1 | Viewed by 2254
Abstract
Foot pain is a common musculoskeletal disorder. Orthotic insoles are widely used in patients with foot pain. Inexperienced clinicians have difficulty prescribing orthotic insoles appropriately by considering various factors associated with the alteration of foot alignment. We attempted to develop deep-learning algorithms that can automatically prescribe orthotic insoles to patients with foot pain and assess their accuracy. In total, 838 patients were included in this study; 70% (n = 586) and 30% (n = 252) were used as the training and validation sets, respectively. The resting calcaneal stance position and data related to pelvic elevation, pelvic tilt, and pelvic rotation were used as input data for developing the deep-learning algorithms for insole prescription. The target data were the foot posture index for the modified root technique and the necessity of heel lift, entire lift, and lateral wedge, medial wedge, and calcaneocuboid arch supports. In the results, regarding the foot posture index for the modified root technique, for the left foot, the mean absolute error (MAE) and root mean square error (RMSE) of the validation dataset for the developed model were 1.408 and 3.365, respectively. For the right foot, the MAE and RMSE of the validation dataset for the developed model were 1.601 and 3.549, respectively. The accuracies for heel lift, entire lift, and lateral wedge, medial wedge, and calcaneocuboid arch supports were 89.7%, 94.8%, 72.2%, 98.4%, and 79.8%, respectively. The micro-average area under the receiver operating characteristic curves for heel lift, entire lift, and lateral wedge, medial wedge, and calcaneocuboid arch supports were 0.949, 0.941, 0.826, 0.792, and 0.827, respectively. In conclusion, our deep-learning models automatically prescribed orthotic insoles in patients with foot pain and showed outstanding to acceptable accuracy. Full article
(This article belongs to the Special Issue AI Technology in Medical Image Analysis)
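As an illustration of the input/target structure described above, the sketch below shows a multi-output Keras network: a few tabular features in, one regression head for the foot posture index and sigmoid outputs for the five insole components. Layer sizes and the feature encoding are illustrative assumptions, not the authors' models.

```python
# Multi-output network: regression (foot posture index) plus five binary decisions.
import tensorflow as tf
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(4,))   # calcaneal stance, pelvic elevation/tilt/rotation
x = layers.Dense(64, activation="relu")(inputs)
x = layers.Dense(64, activation="relu")(x)

fpi = layers.Dense(1, name="foot_posture_index")(x)                   # regression head
supports = layers.Dense(5, activation="sigmoid", name="supports")(x)  # heel lift, entire lift,
                                                                      # lateral/medial wedge,
                                                                      # calcaneocuboid arch

model = models.Model(inputs, [fpi, supports])
model.compile(optimizer="adam",
              loss={"foot_posture_index": "mse", "supports": "binary_crossentropy"},
              metrics={"foot_posture_index": ["mae"], "supports": ["accuracy"]})
```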

12 pages, 1746 KiB  
Article
Quality Assurance of Chest X-ray Images with a Combination of Deep Learning Methods
by Daisuke Oura, Shinpe Sato, Yuto Honma, Shiho Kuwajima and Hiroyuki Sugimori
Appl. Sci. 2023, 13(4), 2067; https://doi.org/10.3390/app13042067 - 5 Feb 2023
Cited by 6 | Viewed by 2830
Abstract
Background: Chest X-ray (CXR) imaging is the most common examination; however, no automatic quality assurance (QA) system using deep learning (DL) has been established for CXR. This study aimed to construct a DL-based QA system and assess its usefulness. Method: Datasets were created using over 23,000 images from Chest-14 and clinical images. The QA system consisted of three classification models and one regression model. The classification models were used to correct image orientation and left–right reversal and to estimate the patient’s position (standing, sitting, or lying). The regression model was used to correct the image angle. ResNet-50, VGG-16, and an original convolutional neural network (CNN) were compared under five-fold cross-validation. The overall accuracy of the QA system was tested using clinical images, and its mean correction time was measured. Result: ResNet-50 demonstrated higher performance in the classification tasks, whereas the original CNN was preferred for the regression. The orientation, angle, and left–right reversal were fully corrected in all images. Moreover, patients’ positions were estimated with 96% accuracy. The mean correction time was approximately 0.4 s. Conclusion: The DL-based QA system quickly and accurately corrected CXR images. Full article
(This article belongs to the Special Issue AI Technology in Medical Image Analysis)
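A minimal sketch of how the classification and regression outputs described above could be combined to correct an image: coarse orientation and left–right reversal come from classifiers, the residual angle from a regressor, and the corrections are applied in sequence. The three models and their output conventions are assumptions for illustration.

```python
# Applying orientation, flip, and angle corrections predicted by separate models.
from PIL import Image, ImageOps

def correct_cxr(img: Image.Image, orientation_model, flip_model, angle_model) -> Image.Image:
    """orientation_model -> rotation in {0, 90, 180, 270} degrees; flip_model -> bool;
    angle_model -> small residual correction angle in degrees (all assumed conventions)."""
    out = img.rotate(orientation_model(img), expand=True)   # undo coarse mis-orientation
    if flip_model(img):
        out = ImageOps.mirror(out)                           # undo left-right reversal
    return out.rotate(angle_model(img))                      # fine angle correction
```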

15 pages, 5046 KiB  
Article
Age Estimation from Brain Magnetic Resonance Images Using Deep Learning Techniques in Extensive Age Range
by Kousuke Usui, Takaaki Yoshimura, Minghui Tang and Hiroyuki Sugimori
Appl. Sci. 2023, 13(3), 1753; https://doi.org/10.3390/app13031753 - 30 Jan 2023
Cited by 6 | Viewed by 2643
Abstract
Estimation of human age is important in the fields of forensic medicine and the detection of neurodegenerative diseases of the brain. Particularly, the age estimation methods using brain magnetic resonance (MR) images are greatly significant because these methods not only are noninvasive but also do not lead to radiation exposure. Although several age estimation methods using brain MR images have already been investigated using deep learning, there are no reports involving younger subjects such as children. This study investigated the age estimation method using T1-weighted (sagittal plane) two-dimensional brain MR imaging (MRI) of 1000 subjects aged 5–79 (31.64 ± 18.04) years. This method uses a regression model based on ResNet-50, which estimates the chronological age (CA) of unknown brain MR images by training brain MR images corresponding to the CA. The correlation coefficient, coefficient of determination, mean absolute error, and root mean squared error were used as the evaluation indices of this model, and the results were 0.9643, 0.9299, 5.251, and 6.422, respectively. The present study showed the same degree of correlation as those of related studies, demonstrating that age estimation can be performed for a wide range of ages with higher estimation accuracy. Full article
(This article belongs to the Special Issue AI Technology in Medical Image Analysis)
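A minimal PyTorch sketch of the regression setup described above: a ResNet-50 adapted to single-channel slices with a single-value output head, trained against chronological age with an L1 loss (i.e., optimizing MAE directly). Data handling and hyperparameters are illustrative assumptions.

```python
# ResNet-50 as an age regressor for grayscale MR slices.
import torch
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # 1-channel input
model.fc = nn.Linear(model.fc.in_features, 1)                                   # single age output

criterion = nn.L1Loss()          # optimizes mean absolute error directly
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, ages: torch.Tensor) -> float:
    """images: (N, 1, H, W); ages: (N,) chronological ages in years."""
    optimizer.zero_grad()
    pred = model(images).squeeze(1)
    loss = criterion(pred, ages.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```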

27 pages, 17260 KiB  
Article
Multi-Class Breast Cancer Histopathological Image Classification Using Multi-Scale Pooled Image Feature Representation (MPIFR) and One-Versus-One Support Vector Machines
by David Clement, Emmanuel Agu, Muhammad A. Suleiman, John Obayemi, Steve Adeshina and Wole Soboyejo
Appl. Sci. 2023, 13(1), 156; https://doi.org/10.3390/app13010156 - 22 Dec 2022
Cited by 12 | Viewed by 3038
Abstract
Breast cancer (BC) is currently the most common form of cancer diagnosed worldwide, with an incidence estimated at 2.26 million in 2020. Additionally, BC is the leading cause of cancer death. Many subtypes of breast cancer exist, with distinct biological features, which respond differently to various treatment modalities and have different clinical outcomes. To ensure that patients receive lifesaving, patient-tailored treatment early, it is crucial to accurately distinguish dangerous malignant subtypes of tumors (ductal carcinoma, lobular carcinoma, mucinous carcinoma, and papillary carcinoma) from harmless benign subtypes (adenosis, fibroadenoma, phyllodes tumor, and tubular adenoma). An excellent automated method for detecting malignant subtypes of tumors is desirable, since doctors do not identify 10% to 30% of breast cancers during regular examinations. While several computerized methods for breast cancer classification have been proposed, deep convolutional neural networks (DCNNs) have demonstrated superior performance. In this work, we proposed an ensemble of four variants of DCNNs combined with a support vector machine (SVM) classifier to classify breast cancer histopathological images into eight subtype classes: four benign and four malignant. The proposed method utilizes the power of DCNNs to extract a highly predictive multi-scale pooled image feature representation (MPIFR) from four resolutions (40×, 100×, 200×, and 400×) of BC images, which is then classified using an SVM. Eight pre-trained DCNN architectures (Inceptionv3, InceptionResNetv2, ResNet18, ResNet50, DenseNet201, EfficientNetb0, ShuffleNet, and SqueezeNet) were individually trained, and an ensemble of the four best-performing models (ResNet50, ResNet18, DenseNet201, and EfficientNetb0) was utilized for feature extraction. One-versus-one SVM classification was then utilized to model an 8-class breast cancer image classifier. Our work is novel because, while some prior work has utilized CNNs for 2- and 4-class breast cancer classification, only one other prior work proposed a solution for 8-class BC histopathological image classification; that work utilized a 6B-Net deep CNN model and achieved an accuracy of 90% for 8-class BC classification. In rigorous evaluation, the proposed MPIFR method achieved an average accuracy of 97.77%, with 97.48% sensitivity and 98.45% precision, on the BreakHis histopathological BC image dataset, outperforming the prior state of the art for histopathological breast cancer multi-class classification and a comprehensive set of DCNN baseline models. Full article
(This article belongs to the Special Issue AI Technology in Medical Image Analysis)
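A minimal sketch of the feature-extraction-plus-SVM stage described above: global-average-pooled deep features from several pretrained backbones are concatenated per image and fed to a one-versus-one SVM. Backbone handling and pooling are simplified assumptions; note that scikit-learn's SVC uses a one-versus-one scheme internally for multi-class problems.

```python
# Pooled multi-backbone features classified with a one-versus-one SVM.
import numpy as np
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, resnet50, densenet201, efficientnet_b0
from sklearn.svm import SVC

# The four best-performing backbones named above, with ImageNet weights.
backbones = [resnet18(weights="DEFAULT"), resnet50(weights="DEFAULT"),
             densenet201(weights="DEFAULT"), efficientnet_b0(weights="DEFAULT")]

def pooled_features(model: torch.nn.Module, images: torch.Tensor) -> np.ndarray:
    """Global-average-pooled features from the backbone with its classifier head removed."""
    trunk = torch.nn.Sequential(*list(model.children())[:-1]).eval()
    with torch.no_grad():
        out = trunk(images)
    return torch.flatten(F.adaptive_avg_pool2d(out, 1), 1).numpy()

def mpifr_like_features(images: torch.Tensor) -> np.ndarray:
    """Concatenate pooled features from every backbone for each image."""
    return np.concatenate([pooled_features(m, images) for m in backbones], axis=1)

# One-versus-one SVM over the concatenated features (X: features, y: 8 subtype labels).
svm = SVC(kernel="rbf", decision_function_shape="ovo")
# svm.fit(mpifr_like_features(train_images), train_labels)
```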
