Brain Tumor Segmentation Using Deep Learning on MRI Images
Abstract
1. Introduction
- A CNN model was developed to extract features from brain MRI images containing tumors;
- The CNN architecture was found to produce better classification results on medical image datasets;
- Feature maps are produced after multiple convolutional layers extract features from the input images;
- A model with 98% overall accuracy was built on the BraTS dataset;
- The computational complexity of the proposed approach is compared with that of a transfer learning-based strategy.
2. Literature Review
3. Dataset Collection
3.1. BraTS Dataset
3.2. Data Description
3.3. Data Preparation
3.4. Feature Extraction from the BraTS Dataset
- Imaging morphological features: Imaging morphological features may be retrieved from MRI scans to capture the form, size, and structure of distinct areas in the images. To distinguish between normal and pathological tissue and to identify various BT zones, images’ morphological characteristics can be exploited;
- Imaging gradient features: These features capture the variations in intensity values found in MRI scans and may be used to mark the borders and edges of various image areas. The boundaries between several BT zones may be located using image gradient properties;
- Image texture features: These features may be taken from MRI scans to represent the spatial distribution of intensity levels. Texture features can be utilized to distinguish between multiple regions of the BTs, such as expanding peritumoral edema, tumor core, and necrosis;
- Image intensity features: Features for the DL model may be created using the intensity values of the various voxels in the MRI images. Because intensity differs between normal and malignant tissue, intensity characteristics can be utilized to identify aberrant tissue, such as tumors.
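As a sketch of the intensity and gradient features described above, the snippet below computes first-order intensity statistics and a gradient-magnitude measure from a single 2D slice with NumPy. The synthetic "lesion" image and the specific statistics chosen are illustrative assumptions, not features taken from the paper.

```python
import numpy as np

def intensity_gradient_features(slice_2d):
    """Illustrative hand-crafted features from a single 2D MRI slice:
    summary intensity statistics plus a mean gradient-magnitude measure."""
    # Intensity features: simple first-order statistics of the voxel values.
    feats = {
        "mean": float(slice_2d.mean()),
        "std": float(slice_2d.std()),
        "max": float(slice_2d.max()),
    }
    # Gradient features: finite differences approximate intensity changes,
    # which are largest at tissue boundaries and tumor edges.
    gy, gx = np.gradient(slice_2d.astype(float))
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    feats["mean_gradient"] = float(grad_mag.mean())
    return feats

# Synthetic slice: a bright square ("lesion") on a dark background.
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
feats = intensity_gradient_features(img)
```

In practice these statistics would be computed per region (e.g., inside a candidate tumor mask) rather than over the whole slice.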
3.5. Deep-Learning Features
- Deep-Learning Features: In ML, particularly in the area of computer vision, DL features are a sort of feature extraction technique. DL features are learned automatically from data via the training process of a DNN, such as a CNN;
- Convolution: The initial stage in DL feature extraction is convolution on the input image, which entails sliding a small kernel over the image and executing element-wise multiplications between the kernel and the corresponding pixels in the image. A series of feature maps that depict the regional connections among the image’s pixels are the result of the convolution procedure;
- Pooling: The next stage, known as pooling, is a downsampling technique that shrinks the feature maps’ spatial dimensions. The purpose of pooling is to lower the computational expense and increase the features’ resiliency to slight translations and deformations in the image;
- Nonlinear activation: Following pooling, a nonlinear activation function, such as a rectified linear unit (ReLU), is applied to the feature maps to incorporate nonlinearity into the features and to enable the network to learn intricate correlations between the pixels in the image;
- Repeat: To extract more complex features from the image, convolution, pooling, and nonlinear activation are performed several times, each time using larger and more complicated kernels;
- Fully connected layer: After several iterations of convolution, pooling, and nonlinear activation, the network’s ultimate output is a collection of fully connected layers. These layers utilize the learned characteristics to predict things about the images, such as whether a BT is present.
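One stage of the convolution/activation/pooling pipeline described above can be sketched in plain NumPy. The tiny edge-detecting kernel, the image size, and the ordering of activation and pooling here are illustrative choices, not the paper's architecture; like most DL frameworks, the "convolution" shown is technically cross-correlation.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image, taking
    element-wise products with the underlying pixels to build a feature map."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Nonlinear activation: keep positive responses, zero out the rest."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Downsample by taking the max over non-overlapping size x size windows."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# A vertical edge and a 1x2 horizontal-gradient kernel that responds to it.
image = np.zeros((8, 8))
image[:, 4:] = 1.0
edge_kernel = np.array([[-1.0, 1.0]])
fmap = max_pool(relu(conv2d(image, edge_kernel)))  # one pipeline stage
```

The pooled feature map is nonzero only in the column containing the edge, illustrating how each stage preserves the location of a learned pattern while shrinking the spatial dimensions.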
4. Methodology
- Preprocessing: The brain’s MRI images from the BraTS dataset are multimodal. To retrieve the necessary data for tumor segmentation, these images need to be preprocessed. The usual steps for doing this include registering the images in a shared space, resampling to the same resolution, and leveling the intensities;
- Data preparation: Divide the data into training, validation, and testing sets. The training set is used to train the model, the validation set to fine-tune its hyperparameters, and the testing set to assess the model’s effectiveness;
- Model choice: Decide on an appropriate model for BT segmentation. CNNs have been demonstrated to perform effectively in this job. One may either modify a pre-trained model such as VGG16 or ResNet for a particular application, or one can create one’s own CNN architecture from the start;
- Model training: Train the chosen model using the training set of data. To evaluate the model’s performance, one must select a loss function, an optimizer, and a measure. Depending on the nature of the issue, it is necessary to select a loss function; for binary segmentation, this can be binary cross-entropy;
- Fine-tuning of the hyperparameters: Use the validation set to adjust the model’s hyperparameters. The performance of the model will be enhanced as a result;
- Model evaluation: Examine how well the model performs on the testing set. The performance may be assessed using measures such as the Dice coefficient and the Jaccard index (IoU).
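The data-preparation and loss-selection steps above can be sketched as follows. The 70/15/15 split ratio and the Dice formulation are common conventions assumed for illustration; the paper does not prescribe these exact values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data preparation: shuffle sample indices, then split into
# training / validation / testing sets (70% / 15% / 15% assumed here).
n = 100
indices = rng.permutation(n)
train_idx = indices[:70]
val_idx = indices[70:85]
test_idx = indices[85:]

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice similarity between two binary masks. For segmentation,
    (1 - Dice) is often used as a loss alongside binary cross-entropy;
    eps keeps the ratio defined when both masks are empty."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)
```

The validation indices would drive hyperparameter tuning, and the held-out test indices would be touched only once, for the final evaluation.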
4.1. Deep CNN Model for BT Segmentation
4.2. Data Gathering for Model Training
4.3. Hyperparameter Tuning
5. Results
5.1. Model Summary
5.2. Model Evaluation
- HD (Hausdorff distance) measures the greatest distance between any point in the predicted segmentation and the nearest point in the ground-truth segmentation;
- Specificity measures the model’s ability to correctly identify all the negative cases (non-tumor pixels);
- Sensitivity gauges how well the model can detect all of the positive cases (tumor pixels);
- IoU (the Jaccard index) measures the overlap between the predicted and ground-truth segmentations relative to their union;
- DSC (Dice similarity coefficient) measures the overlap between the predicted and ground-truth segmentations.
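The overlap-based metrics above all derive from pixel-wise true/false positive and negative counts, as in this NumPy sketch (the example masks are synthetic, for illustration only):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise binary metrics for a predicted vs. ground-truth tumor mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "sensitivity": tp / (tp + fn),        # recall on tumor pixels
        "specificity": tn / (tn + fp),        # recall on non-tumor pixels
        "iou": tp / (tp + fp + fn),           # Jaccard index: overlap / union
        "dice": 2 * tp / (2 * tp + fp + fn),  # Dice similarity coefficient
    }

truth = np.zeros((10, 10), dtype=bool)
truth[2:6, 2:6] = True               # 16 ground-truth tumor pixels
pred = np.zeros_like(truth)
pred[3:7, 3:7] = True                # prediction shifted by one pixel
m = segmentation_metrics(pred, truth)
```

Note that Dice and IoU are monotonically related (DSC = 2·IoU/(1+IoU)), which is why papers often report both but they rank models identically.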
5.3. BT Segmentation Prediction on the Trained Model
- 0: ‘NOT tumor’;
- 1: ‘NECROTIC/CORE’;
- 2: ‘EDEMA’;
- 3: ‘ENHANCING’.
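Training with categorical cross-entropy (as in the hyperparameter table below in Section 5) requires these integer class labels in one-hot form. A minimal sketch, assuming a NumPy label map; the helper name and array sizes are illustrative:

```python
import numpy as np

CLASSES = {0: "NOT tumor", 1: "NECROTIC/CORE", 2: "EDEMA", 3: "ENHANCING"}

def to_one_hot(label_map, num_classes=4):
    """Convert an integer label map of shape (H, W) into a one-hot volume
    of shape (H, W, num_classes) — the target format expected by
    categorical cross-entropy."""
    # Fancy indexing into the identity matrix picks one unit row per pixel.
    return np.eye(num_classes, dtype=np.float32)[label_map]

labels = np.array([[0, 1],
                   [2, 3]])          # one pixel of each class
one_hot = to_one_hot(labels)
```

Each pixel's channel vector sums to one, so the network's softmax output can be compared against it directly.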
5.4. BT Segmentation Prediction on the Trained Model
- 0: ‘NOT tumor’, 1: ‘NECROTIC/CORE’, 2: ‘EDEMA’, 3: ‘ENHANCING’.
- FN: The number of pixels incorrectly classified as not belonging to a particular class;
- TN: The number of pixels correctly classified as not belonging to a particular class;
- FP: The number of pixels erroneously assigned to a particular class;
- TP: The number of pixels correctly classified as belonging to a particular class.
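For multi-class segmentation, these counts are computed one class at a time by treating that class as positive and all others as negative. A minimal sketch with assumed toy label maps:

```python
import numpy as np

def per_class_counts(pred, truth, cls):
    """Per-class pixel counts: treat class `cls` as positive,
    every other class as negative."""
    p = (pred == cls)
    t = (truth == cls)
    return {
        "TP": int(np.sum(p & t)),    # correctly assigned to cls
        "FP": int(np.sum(p & ~t)),   # wrongly assigned to cls
        "FN": int(np.sum(~p & t)),   # cls pixels the model missed
        "TN": int(np.sum(~p & ~t)),  # correctly rejected pixels
    }

truth = np.array([[0, 2],
                  [2, 3]])
pred = np.array([[0, 2],
                 [1, 3]])
c = per_class_counts(pred, truth, cls=2)  # counts for the EDEMA class
```

Repeating this over all four classes yields one confusion matrix per class, from which per-class sensitivity, specificity, IoU, and DSC follow.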
5.5. BT Segmentation Prediction on the Trained Model
5.6. Comparative Analysis
6. Discussion
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Rizwan, M.; Shabbir, A.; Javed, A.R.; Shabbir, M.; Baker, T.; Obe, D.A.-J. Brain Tumor and Glioma Grade Classification Using Gaussian Convolutional Neural Network. IEEE Access 2022, 10, 29731–29740. [Google Scholar] [CrossRef]
- Sharma, S.; Gupta, S.; Gupta, D.; Juneja, A.; Khatter, H.; Malik, S.; Bitsue, Z.K. Deep Learning Model for Automatic Classification and Prediction of Brain Tumor. J. Sens. 2022, 2022, 3065656. [Google Scholar] [CrossRef]
- Khanmohammadi, S.; Mobarakabadi, M.; Mohebi, F. The Economic Burden of Malignant Brain Tumors. In Human Brain and Spinal Cord Tumors: From Bench to Bedside. Volume 1. Advances in Experimental Medicine and Biology; Rezaei, N., Hanaei, S., Eds.; Springer: Cham, Switzerland, 2023; Volume 1394. [Google Scholar] [CrossRef]
- Deepak, S.; Ameer, P.M. Brain Tumor Classification using Deep CNN Features via Transfer Learning. Comput. Biol. Med. 2019, 111, 103345. [Google Scholar] [CrossRef] [PubMed]
- Rasool, M.; Ismail, N.A.; Al-Dhaqm, A.; Yafooz, W.M.S.; Alsaeedi, A. A Novel Approach for Classifying Brain Tumours Combining a SqueezeNet Model with SVM and Fine-Tuning. Electronics 2022, 12, 149. [Google Scholar] [CrossRef]
- Petrosyan, E.; Fares, J.; Fernandez, L.G.; Yeeravalli, R.; Dmello, C.; Duffy, J.T.; Zhang, P.; Lee-Chang, C.; Miska, J.; Ahmed, A.U.; et al. Endoplasmic Reticulum Stress in the Brain Tumor Immune Microenvironment. Mol. Cancer Res. 2023, OF1–OF8. [Google Scholar] [CrossRef]
- Kokkalla, S.; Kakarla, J.; Venkateswarlu, I.B.; Singh, M. Three-class Brain Tumor Classification using Deep Dense Inception Residual Network. Soft Comput. 2021, 25, 8721–8729. [Google Scholar] [CrossRef]
- Polat, Ö.; Güngen, C. Classification of Brain Tumors from MR Images using Deep Transfer Learning. J. Supercomput. 2021, 77, 7236–7252. [Google Scholar] [CrossRef]
- Chieffo, D.P.R.; Lino, F.; Ferrarese, D.; Belella, D.; Della Pepa, G.M.; Doglietto, F. Brain Tumor at Diagnosis: From Cognition and Behavior to Quality of Life. Diagnostics 2023, 13, 541. [Google Scholar] [CrossRef]
- Norman, R.; Flaugher, T.; Chang, S.; Power, E. Self-Perception of Cognitive-Communication Functions After Mild Traumatic Brain Injury. Am. J. Speech Lang. Pathol. 2023, 32, 883–906. [Google Scholar] [CrossRef]
- Hauptmann, M.; Byrnes, G.; Cardis, E.; Bernier, M.O.; Blettner, M.; Dabin, J.; Engels, H.; Istad, T.S.; Johansen, C.; Kaijser, M.; et al. Brain Cancer after Radiation Exposure from CT Examinations of Children and Young Adults: Results from the EPI-CT Cohort Study. Lancet Oncol. 2023, 24, 45–53. [Google Scholar] [CrossRef]
- Kesav, N.; Jibukumar, M.G. Efficient and Low Complex Architecture for Detection and Classification of Brain Tumor using RCNN with Two Channel CNN. J. King Saud. Univ.-Comput. Inf. Sci. 2021, 34, 6229–6242. [Google Scholar] [CrossRef]
- Tummala, S.; Kadry, S.; Bukhari, S.A.C.; Rauf, H.T. Classification of Brain Tumor from Magnetic Resonance Imaging Using Vision Transformers Ensembling. Curr. Oncol. 2022, 29, 7498–7511. [Google Scholar] [CrossRef] [PubMed]
- Pareek, M.; Jha, C.K.; Mukherjee, S. Brain Tumor Classification from MRI Images and Calculation of Tumor Area. In Soft Computing: Theories and Applications. Advances in Intelligent Systems and Computing; Pant, M., Sharma, T., Verma, O., Singla, R., Sikander, A., Eds.; Springer: Singapore, 2020; Volume 1053. [Google Scholar] [CrossRef]
- Srikantamurthy, M.M.; Rallabandi, V.P.S.; Dudekula, D.B.; Natarajan, S.; Park, J. Classification of Benign and Malignant Subtypes of Breast Cancer Histopathology Imaging using Hybrid CNN-LSTM based Transfer Learning. BMC Med. Imaging 2023, 23, 19. [Google Scholar] [CrossRef] [PubMed]
- Ayadi, W.; Charfi, I.; Elhamzi, W.; Atri, M. Brain Tumor Classification based on Hybrid Approach. Vis. Comput. 2022, 38, 107–117. [Google Scholar] [CrossRef]
- Konar, D.; Bhattacharyya, S.; Panigrahi, B.K.; Behrman, E.C. Qutrit-Inspired Fully Self-Supervised Shallow Quantum Learning Network for Brain Tumor Segmentation. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 6331–6345. [Google Scholar] [CrossRef] [PubMed]
- Khairandish, M.O.; Sharma, M.; Jain, V.; Chatterjee, J.M.; Jhanjhi, N.Z. A Hybrid CNN-SVM Threshold Segmentation Approach for Tumor Detection and Classification of MRI Brain Images. IRBM 2021, 43, 290–299. [Google Scholar] [CrossRef]
- Öksüz, C.; Urhan, O.; Güllü, M.K. Brain Tumor Classification using the Fused Features Extracted from Expanded Tumor Region. Biomed. Signal Process. Control 2022, 72, 103356. [Google Scholar] [CrossRef]
- Khan, M.A.; Ashraf, I.; Alhaisoni, M.; Damaševičius, R.; Scherer, R.; Rehman, A.; Bukhari, S.A.C. Multimodal Brain Tumor Classification Using Deep Learning and Robust Feature Selection: A Machine Learning Application for Radiologists. Diagnostics 2020, 10, 565. [Google Scholar] [CrossRef]
- Kadry, S.; Nam, Y.; Rauf, H.T.; Rajinikanth, V.; Lawal, I.A. Automated Detection of Brain Abnormality using Deep-Learning-Scheme: A Study. In Proceedings of the 2021 Seventh International Conference on Bio Signals, Images, and Instrumentation (ICBSII), Chennai, India, 25–27 March 2021; pp. 1–5. [Google Scholar] [CrossRef]
- Irmak, E. Multi-Classification of Brain Tumor MRI Images Using Deep Convolutional Neural Network with Fully Optimized Framework. Iran. J. Sci. Technol. Trans. Electr. Eng. 2021, 45, 1015–1036. [Google Scholar] [CrossRef]
- Khafaga, D.S.; Alhussan, A.A.; El-Kenawy, E.-S.M.; Ibrahim, A.; Eid, M.M.; Abdelhamid, A.A. Solving Optimization Problems of Metamaterial and Double T-Shape Antennas Using Advanced Meta-Heuristics Algorithms. IEEE Access 2022, 10, 74449–74471. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. Lect. Notes Comput. Sci. 2015, 9351, 234–241. [Google Scholar] [CrossRef]
- Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2016; Lecture Notes in Computer Science; Ourselin, S., Joskowicz, L., Sabuncu, M., Unal, G., Wells, W., Eds.; Springer: Cham, Switzerland, 2016; Volume 9901. [Google Scholar] [CrossRef]
- Nour, M.; Cömert, Z.; Polat, K. A Novel Medical Diagnosis Model for COVID-19 Infection Detection based on Deep Features and Bayesian Optimization. Appl. Soft Comput. 2020, 97, 106580. [Google Scholar] [CrossRef] [PubMed]
- Hossain, A.; Islam, M.T.; Abdul Rahim, S.K.; Rahman, M.A.; Rahman, T.; Arshad, H.; Khandakar, A.; Ayari, M.A.; Chowdhury, M.E.H. A Lightweight Deep Learning Based Microwave Brain Image Network Model for Brain Tumor Classification Using Reconstructed Microwave Brain (RMB) Images. Biosensors 2023, 13, 238. [Google Scholar] [CrossRef] [PubMed]
- Pattanaik, B.B.; Anitha, K.; Rathore, S.; Biswas, P.; Sethy, P.K.; Behera, S.K. Brain Tumor Magnetic Resonance Images Classification-based Machine Learning Paradigms. Contemp. Oncol. 2022, 26, 268–274. [Google Scholar] [CrossRef] [PubMed]
- Solanki, S.; Singh, U.P.; Chouhan, S.S.; Jain, S. Brain Tumor Detection and Classification using Intelligence Techniques: An Overview. IEEE Access 2023, 11, 12870–12886. [Google Scholar] [CrossRef]
- Shahin, A.I.; Aly, S.; Aly, W. A Novel Multi-class Brain Tumor Classification Method based on Unsupervised PCANet Features. Neural Comput. Appl. 2023. [Google Scholar] [CrossRef]
- Tharwat, A.; Gaber, T.; Ibrahim, A.; Hassanien, A.E. Linear Discriminant Analysis: A detailed Tutorial. AI Commun. 2017, 30, 169–190. [Google Scholar] [CrossRef]
- AlBadawy, E.A.; Saha, A.; Mazurowski, M.A. Deep Learning for Segmentation of Brain Tumors: Impact of Cross-Institutional Training and Testing. Med. Phys. 2018, 45, 1150–1158. [Google Scholar] [CrossRef]
- Chang, J.; Zhang, L.; Gu, N.; Zhang, X.; Ye, M.; Yin, R.; Meng, Q. A mix-pooling CNN Architecture with FCRF for Brain Tumor Segmentation. J. Vis. Commun. Image Represent. 2019, 58, 316–322. [Google Scholar] [CrossRef]
- Alrashedy, H.H.N.; Almansour, A.F.; Ibrahim, D.M.; Hammoudeh, M.A.A. BrainGAN: Brain MRI Image Generation and Classification Framework Using GAN Architectures and CNN Models. Sensors 2022, 22, 4297. [Google Scholar] [CrossRef]
- Gab Allah, A.M.; Sarhan, A.M.; Elshennawy, N.M. Classification of Brain MRI Tumor Images Based on Deep Learning PGGAN Augmentation. Diagnostics 2021, 11, 2343. [Google Scholar] [CrossRef] [PubMed]
- Ge, C.; Gu, I.Y.-H.; Jakola, A.S.; Yang, J. Enlarged Training Dataset by Pairwise GANs for Molecular-Based Brain Tumor Classification. IEEE Access 2020, 8, 22560–22570. [Google Scholar] [CrossRef]
- Han, C.; Rundo, L.; Murao, K.; Noguchi, T.; Shimahara, Y.; Milacski, Z.; Koshino, S.; Sala, E.; Nakayama, H.; Satoh, S. MADGAN: Unsupervised Medical Anomaly Detection GAN using Multiple Adjacent Brain MRI Slice Reconstruction. BMC Bioinform. 2021, 22 (Suppl. S2), 31. [Google Scholar] [CrossRef] [PubMed]
- Dixit, A.; Nanda, A. An Improved Whale Optimization Algorithm-based Radial Neural Network for Multi-Grade Brain Tumor Classification. Vis. Comput. 2022, 38, 3525–3540. [Google Scholar] [CrossRef]
- Tandel, G.S.; Balestrieri, A.; Jujaray, T.; Khanna, N.N.; Saba, L.; Suri, J.S. Multiclass Magnetic Resonance Imaging Brain Tumor Classification using Artificial Intelligence Paradigm. Comput. Biol. Med. 2020, 122, 103804. [Google Scholar] [CrossRef] [PubMed]
- Brain Tumor Segmentation (BraTS2020). (n.d.). Available online: https://www.kaggle.com/datasets/awsaf49/brats2020-training-data (accessed on 11 March 2023).
- Sharif, M.I.; Li, J.P.; Amin, J.; Sharif, A. An improved framework for brain tumor analysis using MRI based on YOLOv2 and convolutional neural network. Complex Intell. Syst. 2021, 7, 2023–2036. [Google Scholar] [CrossRef]
- Maqsood, S.; Damaševičius, R.; Maskeliūnas, R. Multi-modal brain tumor detection using deep neural network and multiclass SVM. Medicina 2022, 58, 1090. [Google Scholar] [CrossRef]
- Younis, A.; Qiang, L.; Nyatega, C.O.; Adamu, M.J.; Kawuwa, H.B. Brain tumor analysis using deep learning and VGG-16 ensembling learning approaches. Appl. Sci. 2022, 12, 7282. [Google Scholar] [CrossRef]
Ref | Dataset | Techniques Used | Result |
---|---|---|---|
[7] | 3100 images included in the dataset | Deep Dense Inception Residual Network, Three-class BT classification | Accuracy: 99.34% |
[8] | Figshare dataset | DL, Transfer Learning-based Classification, ResNet Mixed Convolution | F1 score: 0.9435; Accuracy: 97.19% |
[12] | BT Classification, Kaggle and Figshare | CNN, RCNN, MRI | Accuracy: 97.98% |
[14] | Kaggle and Figshare dataset | ViT-based DNN | Accuracy: 97.98% |
[16] | Three distinct publicly available datasets used | Classification of three forms of brain tumors, MRI | Accuracy: 90.34% |
[17] | Cancer Imaging Archive (TCIA) dataset | QFS-Net, MRI, Quantum Computing, Qutrit | Accuracy: 98.23% |
[39] | Multiclass datasets, REMBRANDT | ML, KNN, SVM, CNN, MRI | Accuracy (cross-validation): 88.15% (K2), 97.45% (K5), 100% (K10) |
Hyperparameters | Properties |
---|---|
epochs | 5 |
optimizer | Adam |
loss | categorical cross-entropy |
Evaluation Metric | Performance Value |
---|---|
Accuracy | 0.9921 |
Mean IoU | 0.9123 |
Dice coefficient | 0.9012 |
Precision | 0.9923 |
Sensitivity | 0.9678 |
Specificity | 0.9988 |
Loss | 0.1599 |
Confusion Matrix | Predicted Positive | Predicted Negative |
---|---|---|
Actual Positive | True Positive (TP) | False Negative (FN) |
Actual Negative | False Positive (FP) | True Negative (TN) |
Ref | Approach | Evaluation Metrics | Dataset |
---|---|---|---|
[41] | YOLOv2, CNN | Accuracy = 90%; mean IoU = 0.887; Dice coeff = 0.874; precision = 0.915; sensitivity = 0.945; specificity = 0.957 | BraTS dataset |
[42] | DNN, SVM | Accuracy = 97.47%; mean IoU = 0.954; Dice coeff = 0.934; precision = 0.923; sensitivity = 0.914; specificity = 0.934 | BraTS dataset |
[43] | CNN | Accuracy = 96%; mean IoU = 0.951; Dice coeff = 0.962; precision = 0.941; sensitivity = 0.934; specificity = 0.952 | BraTS dataset |
Our Approach | CNN with U-Net sampling | Accuracy = 98%; mean IoU = 0.91; Dice coeff = 0.90; precision = 0.99; sensitivity = 0.96; specificity = 0.99 | BraTS dataset |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Mostafa, A.M.; Zakariah, M.; Aldakheel, E.A. Brain Tumor Segmentation Using Deep Learning on MRI Images. Diagnostics 2023, 13, 1562. https://doi.org/10.3390/diagnostics13091562