Article

Optimal and Efficient Deep Learning Model for Brain Tumor Magnetic Resonance Imaging Classification and Analysis

Manar Ahmed Hamza, Hanan Abdullah Mengash, Saud S. Alotaibi, Siwar Ben Haj Hassine, Ayman Yafoz, Fahd Althukair, Mahmoud Othman and Radwa Marzouk

1 Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam bin Abdulaziz University, AlKharj 16278, Saudi Arabia
2 Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
3 Department of Information Systems, College of Computing and Information System, Umm Al-Qura University, Mecca 24451, Saudi Arabia
4 Department of Computer Science, College of Science & Art at Mahayil, King Khalid University, Abha 62529, Saudi Arabia
5 Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 22254, Saudi Arabia
6 Department of Electrical Engineering and Computer Sciences, College of Engineering, University of California, Berkeley, CA 94720, USA
7 Department of Computer Science, Faculty of Computers and Information Technology, Future University in Egypt, New Cairo 11835, Egypt
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(15), 7953; https://doi.org/10.3390/app12157953
Submission received: 1 June 2022 / Revised: 23 July 2022 / Accepted: 5 August 2022 / Published: 8 August 2022
(This article belongs to the Special Issue Advance in Deep Learning-Based Medical Image Analysis)

Abstract

A brain tumor (BT) is an abnormal growth of brain cells that damages the nerves and blood vessels. An accurate and early diagnosis of BT is important to prevent future complications, and precise segmentation of the BT gives physicians a basis for surgical planning and treatment. Manual detection from MRI images is laborious and error-prone, and computerized tumor diagnosis remains challenging owing to the significant variation in tumor structure and location, viz., ambiguous boundaries and irregular shapes. The application of a convolutional neural network (CNN) helps radiotherapists categorize the types of BT from magnetic resonance images (MRI). This study designs an evolutionary algorithm with a deep learning-driven brain tumor MRI image classification (EADL-BTMIC) model. The presented EADL-BTMIC model aims to accurately recognize and categorize MRI images to identify BT. The EADL-BTMIC model primarily applies bilateral filtering (BF)-based noise removal and skull stripping as a pre-processing stage. In addition, morphological segmentation is carried out to determine the affected regions in the image. Moreover, sooty tern optimization (STO) with the Xception model is exploited for feature extraction. Furthermore, the attention-based long short-term memory (ALSTM) technique is exploited for the classification of BT into distinct classes. To portray the increased performance of the EADL-BTMIC model, a series of simulations were carried out on a benchmark dataset. The experimental outcomes highlighted the enhancements of the EADL-BTMIC model over recent models.

1. Introduction

Cancer is a serious health issue around the world. It is the second leading cause of death, following cardiovascular diseases [1]. Among the various kinds of cancer, brain tumors (BTs) are a life-threatening type due to their heterogeneous features, aggressive nature, and low survival rate. BTs are classified into distinct types based on their texture, location, and shape [2]. Based on the type of tumor, physicians can predict and diagnose patient survival and make decisions regarding the right treatment, which could range from surgery, followed by chemotherapy and then radiotherapy, to a watchful-waiting approach that forgoes invasive procedures. Therefore, tumor grading is a significant factor in treatment monitoring and planning [3,4].
Magnetic Resonance Imaging (MRI) is a non-invasive, pain-free medical imaging process that produces high-quality images of human body organs in 3D and 2D formats [5]. It is extensively utilized because it is the most precise method for categorizing and identifying cancer, owing to its high-resolution images of the brain tissues [6,7]. However, identifying the cancer variety from MRI images is difficult and prone to error: the accuracy depends on the experience of the radiologist, and the process is time-consuming [8]. Additionally, accurate analysis helps the patient initiate the right treatment promptly and live a longer life [9]. This creates a high demand in the Artificial Intelligence (AI) domain for developing and designing novel and creative Computer Assisted Diagnosis (CAD) processes that ease the workload of tumor analysis and categorization and serve as a useful tool for radiologists and doctors [10].
CAD methods are useful to neuro-oncologists in various respects. CAD methods pave the way for the classification and early detection of BTs [11]. With the help of CAD, doctors can perform more precise categorizations than those dependent on visual comparison alone. MRI contains useful data about the position, shape, type, and size of BTs and does not expose the patient to dangerous ionizing radiation. MRI also offers higher soft-tissue contrast than computerized tomography (CT) scans. Therefore, combined with a CAD model, MRI may help rapidly identify a tumor's size and location. Developments in computing have produced strong tools that can help attain more precise diagnoses. These developments in deep learning-related systems, particularly deep neural network (DNN)-related technologies used by well-trained experts, have led to a massive enhancement in medical image analysis and decision making [12].
Due to the rapid advancement of deep learning (DL) methods and their superior capability to classify medical images, CAD has become a widely used diagnostic methodology among medical imaging experts. Expanding the application of DL to the categorization of multiple diseases within the limitations of existing technologies is presently a leading focus of radiology research. Among the multitude of deep machine learning (ML) methods, CNNs are widely utilized for the medical image examination of distinct diseases and, thus, extensively used by researchers.
This study designs an evolutionary algorithm with a deep learning-driven brain tumor MRI image classification (EADL-BTMIC) model. The presented EADL-BTMIC model applies bilateral filtering (BF)-based noise removal and skull stripping as a pre-processing stage. Additionally, morphological segmentation is carried out to determine the affected regions in the image. In addition, sooty tern optimization (STO) with the Xception model is exploited for feature extraction purposes. Finally, the attention-based long short-term memory (ALSTM) technique is exploited for classifying BT into distinct classes. A detailed experimental analysis is carried out to examine the performance of the EADL-BTMIC model. In short, this paper's contributions are summarized as follows:
  • An intelligent EADL-BTMIC model comprising pre-processing, morphological segmentation, Xception-based feature extraction, STO parameter tuning, and ALSTM classification using MRI images is presented. To the best of our knowledge, the EADL-BTMIC model has never been presented in the literature.
  • A novel STO algorithm with the Xception model is applied for the hyperparameter tuning process, which helps in boosting the overall BT classification performance.

2. Related Works

The authors in [13] developed a BT classification method based on the hybrid brain tumor classification (HBTC) model. The presented model reduces the intrinsic difficulties and enhances the classification performance. Additionally, several ML models, such as the multilayer perceptron (MLP), J48, meta bagging (MB), and random tree (RT), are used for the classification of cyst, glioma, menin, and meta tumors. The authors in [14] presented a multi-level attention model to classify BT. The presented multi-level attention network (MANet) comprises spatial and cross-channel attention that concentrates on tumor region prioritization and manages the cross-channel temporal dependency in the semantic feature sequence obtained from the Xception backbone. Nayak et al. [15] presented a CNN-based dense EfficientNet using min–max normalization for recognizing 3260 T1-weighted contrast-enhanced brain MRI images in four categories (glioma, meningioma, pituitary, and no tumor). The developed network is a version of EfficientNet with dense and drop-out layers appended. In addition, data augmentation with min–max normalization is integrated to increase the contrast of tumor cells.
Abd El Kader et al. [16] developed a differential deep convolutional neural network (differential deep-CNN) architecture to categorize a variety of BTs, involving normal and abnormal MR images. An additional differential feature map can be derived from the original CNN feature map by applying a differential operator in the deep-CNN model; this derivation increases the performance of the presented technique in terms of the evaluation parameters. Masood et al. [17] introduced a custom Mask Region-based CNN (Mask RCNN) with a DenseNet-41 backbone architecture, trained through transfer learning (TL), for the accurate segmentation and classification of brain cancers.
In [18], the authors proposed a Fully Automated Heterogeneous Segmentation using SVM (FAHS-SVM) for brain cancer segmentation based on DL technologies. The study separates the cerebral venous system in MRI images with a novel, fully automated technique based on morphological, relaxometry, and structural details. The segmentation function can discriminate with a high level of uniformity between the anatomy of interest and the neighboring brain tissue. The extreme learning machine (ELM) employed there is a kind of learning model that comprises one or more layers of hidden nodes. Mohsen et al. [19] combined a deep neural network with the discrete wavelet transform (DWT) as the feature extraction mechanism and principal component analysis (PCA); the combination proved very effective on the performance measures.
Gab Allah et al. [20] explored the efficiency of a new method for classifying brain cancer MRI using VGG19 feature extraction combined with one of three kinds of classifier. A progressive growing generative adversarial network (PGGAN) augmentation module was utilized to produce 'realistic' MRIs of BT and assisted in resolving the shortage of images required for DL. In Bodapati et al. [21], a two-channel DNN structure that generalizes better is presented for tumor classification. At first, local feature representations are extracted from the convolutional blocks of the Xception and InceptionResNetV2 networks and are vectorized by the presented pooling-based models. An attention model is presented, which focuses more on the tumor region and less on the non-tumor region, ultimately assisting in distinguishing the kind of tumor in the images. Rehman et al. [21] introduce CNNs (VGGNet, AlexNet, and GoogLeNet) for categorizing BTs, namely pituitary, glioma, and meningioma. The study then examines transfer learning techniques that freeze and fine-tune the networks using MRI slices of BT data.

3. The Proposed Model

In this study, a novel EADL-BTMIC model was established to recognize and categorize the MRI images to identify BTs accurately. The EADL-BTMIC model primarily applies BF and the skull stripping process as a pre-processing stage. Additionally, the morphological segmentation process is carried out to determine the affected regions in the image. The STO with the Xception model is exploited for feature extraction purposes. Furthermore, the ALSTM model is exploited to classify BT into distinct classes. Figure 1 showcases the overall working process of the EADL-BTMIC technique.

3.1. Image Pre-Processing

At the introductory level, the EADL-BTMIC model applies BF and the skull stripping process as a pre-processing stage. The BF technique has the benefits of low noise, an uncomplicated design, automated censoring, and rotation symmetry. The input image may contain noise, including Gaussian and salt-and-pepper noise, and noise removal should preserve the information in the input data. The BF technique is applied for de-noising the input image. This is attained by combining two Gaussian filters: one operating in the intensity domain and the other operating in the spatial domain. The spatial and intensity distances are applied as weights in the algorithm. The output at pixel position p is defined by Equation (1):
$$\bar{F}(p) = \frac{1}{N} \sum_{q \in S(p)} e^{-\frac{\| q-p \|^{2}}{2 \varepsilon_{e}^{2}}} \; e^{-\frac{| F(q)-F(p) |^{2}}{2 \varepsilon_{s}^{2}}} \, F(q) \qquad (1)$$

where $N$ is the normalization constant, $S(p)$ denotes the spatial neighborhood of pixel $p$, and the parameters $\varepsilon_{e}$ and $\varepsilon_{s}$ govern how quickly the spatial-domain weight $e^{-\| q-p \|^{2}/2\varepsilon_{e}^{2}}$ and the intensity-domain weight $e^{-| F(q)-F(p) |^{2}/2\varepsilon_{s}^{2}}$ fall off.
The BF has been applied to volumetric de-noising, texture elimination, tone mapping, and other applications besides image de-noising. By expressing the filter in an augmented space, it can be performed as a linear convolution followed by two simple non-linearities, which yields simple criteria for down-sampling the procedure and achieving acceleration.
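As a concrete illustration, the following minimal sketch applies this pre-processing stage with OpenCV. The filter parameters (d = 9 and sigma values of 75) and the Otsu-threshold-based skull-stripping mask are illustrative assumptions, not values specified in this paper.

import cv2
import numpy as np

def preprocess(mri_slice: np.ndarray) -> np.ndarray:
    # mri_slice: 8-bit single-channel MRI slice.
    # Bilateral filter: sigmaSpace plays the role of eps_e (spatial fall-off)
    # and sigmaColor the role of eps_s (intensity fall-off) in Equation (1).
    denoised = cv2.bilateralFilter(mri_slice, 9, 75, 75)
    # Crude skull stripping: Otsu threshold, then keep the largest
    # connected component as the brain mask.
    _, binary = cv2.threshold(denoised, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    if n_labels < 2:                       # no foreground found; return as-is
        return denoised
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))  # label 0 is background
    mask = (labels == largest).astype(denoised.dtype)
    return denoised * mask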

3.2. Image Segmentation

Next to image pre-processing, the morphological segmentation process is carried out to determine the affected regions in the image [22]. Pixel values greater than the specified threshold are marked as white, whereas the remaining regions are mapped as black; this allows distinct regions to be created around the disease. Next, a morphological erosion operation is utilized to extract the white pixels. In this work, the wavelet transformation technique was utilized to decompose data, features, and operators into frequency components so that each component can be studied separately. The wavelet transformation function has been utilized for the effective segmentation of the brain MRI. Each wavelet is generated from a basic wavelet function through scaling and translation. The wavelet function is specified over a restricted time interval with an average value of zero.
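A minimal sketch of this step is given below, assuming OpenCV for the threshold-and-erode stage and PyWavelets for the wavelet decomposition; the threshold value of 150 and the 3 × 3 erosion kernel are illustrative assumptions, not values stated in the paper.

import cv2
import numpy as np
import pywt  # PyWavelets

def segment(denoised: np.ndarray, thresh: int = 150) -> np.ndarray:
    # Pixels above the threshold become white (candidate lesion region),
    # the rest black, as described above.
    _, region = cv2.threshold(denoised, thresh, 255, cv2.THRESH_BINARY)
    # Morphological erosion extracts the white pixels and trims thin noise.
    kernel = np.ones((3, 3), np.uint8)
    return cv2.erode(region, kernel, iterations=1)

def wavelet_components(denoised: np.ndarray):
    # One level of the 2D discrete wavelet transform splits the slice into
    # an approximation (LL) and horizontal/vertical/diagonal detail components.
    LL, (LH, HL, HH) = pywt.dwt2(denoised.astype(float), "haar")
    return LL, LH, HL, HH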

3.3. Feature Extraction

In this work, the STO with the Xception model is exploited for feature extraction purposes. DL is a well-known technology developed from the ML approach by stacking multilayer FFNNs. Under constrained hardware, the many layers of a traditional NN are limited by the number of learned parameters, and computing the relationships between the layers demands considerable time. With the advent of advanced hardware, it has become possible to train deep methodologies as multi-stage NNs. DL technology developed from CNNs enjoys a high rate of adoption in large-scale applications such as speech analysis, object prediction, image processing, and ML techniques. Additionally, a CNN is a multilayer NN [23]. Moreover, a CNN benefits from feature extraction that reduces the pre-processing step to a great extent; hence, it is not necessary to conduct a separate study to identify the features of an image. The CNN is composed of Input, Convolution, ReLU, Pooling, Dropout, Fully Connected (FC), and classification layers. In our study, the DL-based Xception module is used to extract the features from the brain MRI image.
The Xception model is similar to the Inception module, with the Inception blocks substituted by depthwise separable convolution layers. Specifically, the Xception architecture is a linear stack of depthwise separable convolution layers with linear residual connections. In this method, depthwise and pointwise layers are employed: the depthwise layer applies a spatial convolution independently to each channel of the input, and the pointwise layer applies a 1 × 1 convolution that maps the outputs of the depthwise convolution onto a new channel space.
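The following Keras sketch shows how such an Xception backbone can serve as a feature extractor. Using ImageNet weights and global average pooling is an assumption made only to keep the example self-contained; in the presented model, the network's hyperparameters are additionally tuned by STO, as described next.

import numpy as np
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications.xception import preprocess_input

# include_top=False drops the ImageNet classifier head; global average
# pooling turns the final feature maps into one 2048-D vector per image.
extractor = Xception(weights="imagenet", include_top=False, pooling="avg")

def extract_features(batch: np.ndarray) -> np.ndarray:
    # batch: (n, 299, 299, 3) array of RGB images with values in [0, 255].
    return extractor.predict(preprocess_input(batch), verbose=0)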
Here, the STO algorithm is utilized to fine-tune the hyperparameters involved in the Xception model [23]. In this study, the STO algorithm is preferred over other optimization algorithms for the following reasons: it is capable of exploration, exploitation, and local-optima avoidance; it can solve challenging constrained problems; and it is very competitive compared with other optimization algorithms. The STO technique simulates the attack behavior of sooty tern birds. Sooty terns usually live in groups and use their collective intelligence to locate and attack a target. Their most important behaviors are migration and attack, characterized by the following observations:
  • Sooty terns migrate as a group. To avoid collisions, their initial positions are distinct.
  • Within a group, a sooty tern with lower fitness nevertheless travels a distance similar to the fittest among them.
  • Sooty terns with lower fitness update their initial positions on the basis of the fittest sooty tern.
During migration, a sooty tern must meet three requirements. First, the migration factor $S_A$ is utilized to compute a new search-agent position that avoids collisions with neighboring search agents (i.e., other sooty terns):
$$C_{st} = S_A \cdot P_{st}(z)$$

where $C_{st}$ signifies the position of a sooty tern that does not collide with other terns, $P_{st}(z)$ is the current position of the sooty tern, $z$ is the current iteration, and $S_A$ represents the migration factor of the sooty tern in the solution space. After collision avoidance, the search agent converges in the direction of the best neighbor:

$$M_{st} = C_B \cdot \left( P_{bst}(z) - P_{st}(z) \right)$$

where $M_{st}$ signifies the movement of the search agent (i.e., sooty tern) toward the best agent, $P_{bst}(z)$ is the position of the best search agent, and $C_B$ is a random variable calculated as:

$$C_B = 0.5 \cdot Rand$$

where $Rand$ denotes a random number in the range of zero to one. Finally, the sooty tern updates its position in relation to the best search agent:

$$D_{st} = C_{st} + M_{st}$$

where $D_{st}$ denotes the gap between the search agent and the fittest search agent. During the attack, a sooty tern adjusts its speed and attack angle, gaining altitude by flapping its wings and producing a spiral behavior in the air while attacking prey, described by:

$$x = Radius \cdot \sin(i), \quad y = Radius \cdot \cos(i), \quad z' = Radius \cdot i, \quad Radius = u \cdot e^{kv}$$

where $Radius$ stands for the radius of each spiral turn, $i$ denotes a value in the range $0 \le k \le 2\pi$, and $u$ and $v$ are constants defining the spiral shape.
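The NumPy sketch below combines these migration and attack equations into a single position update. The linear decay of $S_A$ from 2 to 0 over the iterations, the spiral constants u = v = 1, and the final update form that scales $D_{st}$ by the spiral terms around the best position are common choices in the STO literature and are assumptions here.

import numpy as np

rng = np.random.default_rng(0)

def sto_update(P_st, P_bst, z, max_iter, u=1.0, v=1.0):
    # P_st: current agent position vector; P_bst: best position found so far.
    S_A = 2.0 - z * (2.0 / max_iter)         # migration factor, decays to 0
    C_st = S_A * P_st                         # collision avoidance
    C_B = 0.5 * rng.random()                  # random weight in [0, 0.5)
    M_st = C_B * (P_bst - P_st)               # convergence toward the best agent
    D_st = C_st + M_st                        # gap to the fittest agent
    # Spiral attack behavior: i drawn from [0, 2*pi], Radius = u * exp(i*v).
    i = rng.uniform(0.0, 2.0 * np.pi)
    radius = u * np.exp(i * v)
    x, y, z_spiral = radius * np.sin(i), radius * np.cos(i), radius * i
    return D_st * (x + y + z_spiral) * P_bst  # new candidate position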

3.4. Image Classification

In the final stage, the ALSTM model is exploited for the classification of BT into distinct classes. The RNN is a class of NNs in which the output of a traditional feed-forward ANN is fed back as new input to the neurons. The output value of a neuron at moment $(z+1)$ depends on its input at moment $z$, which improves the dynamism of the network. Because consecutive input values are thus connected, this method can be regarded as a memory network [4,7,25,26]. In an RNN, the input data are considered interrelated. The LSTM is the best-known RNN method, whose structure was established to overcome the vanishing-gradient problem. Here, $x_z$ denotes the input value at time $z$, and $o_z$ signifies the output value at time $z$. Figure 2 depicts the framework of the LSTM technique.
The LSTM network node contains three fundamental gates, namely the forget gate $f_z$, the input gate $i_z$, and the output gate $o_z$. The input and output gates control the data entering and leaving the node at time $z$, respectively. The forget gate chooses the data to be forgotten based on the preceding state $(h_{z-1})$ and the current input $(x_z)$. These three gates determine how the current memory cell $c_z$ and the current hidden state $h_z$ are updated. Within an LSTM node, the connections among the gates are computed as follows:

$$i_z = \sigma( w_i \cdot [h_{z-1}, x_z] + b_i )$$
$$f_z = \sigma( w_f \cdot [h_{z-1}, x_z] + b_f )$$
$$o_z = \sigma( w_o \cdot [h_{z-1}, x_z] + b_o )$$
$$\tilde{c}_z = \tanh( w_c \cdot [h_{z-1}, x_z] + b_c )$$
$$c_z = f_z \odot c_{z-1} + i_z \odot \tilde{c}_z$$
$$h_z = o_z \odot \tanh( c_z )$$
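A minimal NumPy sketch of one such LSTM step is given below; the dictionary-of-weights layout is an illustrative assumption, chosen only to keep the six equations above visible in the code.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x_z, h_prev, c_prev, W, b):
    # W maps each gate name to a (d, d + m) matrix; b maps it to a (d,) bias,
    # where d is the hidden size and m the input size.
    u = np.concatenate([h_prev, x_z])       # [h_{z-1}, x_z]
    i_z = sigmoid(W["i"] @ u + b["i"])      # input gate
    f_z = sigmoid(W["f"] @ u + b["f"])      # forget gate
    o_z = sigmoid(W["o"] @ u + b["o"])      # output gate
    c_cand = np.tanh(W["c"] @ u + b["c"])   # candidate cell state
    c_z = f_z * c_prev + i_z * c_cand       # element-wise cell-state update
    h_z = o_z * np.tanh(c_z)                # hidden-state output
    return h_z, c_z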
In the LSTM network procedure, the representation vectors are taken as input from the first element of the sequence to the last. Consider $H \in \mathbb{R}^{d \times N}$ a matrix consisting of the hidden vectors $[h_1, \ldots, h_N]$ that the LSTM produced, where $d$ is the size of the hidden layer and $N$ is the length of the given sequence. Moreover, $e_N \in \mathbb{R}^{N}$ represents a vector of ones and $v_a$ the aspect embedding vector. The attention model produces a weighted hidden representation $r$ and a weight vector $\alpha$:

$$M = \tanh\!\left( \begin{bmatrix} W_h H \\ W_v v_a \otimes e_N \end{bmatrix} \right)$$
$$\alpha = \mathrm{softmax}( w^{T} M )$$
$$r = H \alpha^{T}$$

where $M \in \mathbb{R}^{(d+d_a) \times N}$, $\alpha \in \mathbb{R}^{N}$, $r \in \mathbb{R}^{d}$, and the projection parameters are $W_h \in \mathbb{R}^{d \times d}$, $W_v \in \mathbb{R}^{d_a \times d_a}$, and $w \in \mathbb{R}^{d+d_a}$. Here, $r$ signifies a weighted representation of the sequence with respect to the given aspect, and $\alpha$ is the vector of attention weights. The attention model enables the network to capture the essential part of the data when different aspects are considered.
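A sketch of this attention pooling over the LSTM outputs is shown below; the shapes follow the definitions above, and broadcasting $W_v v_a$ across $N$ columns stands in for the $\otimes e_N$ replication. All parameter values are assumed to be supplied by training.

import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def attention_pool(H, v_a, W_h, W_v, w):
    # H: (d, N) LSTM hidden states; v_a: (d_a,) aspect embedding;
    # W_h: (d, d), W_v: (d_a, d_a), w: (d + d_a,) projection parameters.
    d, N = H.shape
    V = np.repeat((W_v @ v_a)[:, None], N, axis=1)  # W_v v_a replicated N times
    M = np.tanh(np.vstack([W_h @ H, V]))            # shape (d + d_a, N)
    alpha = softmax(w @ M)                          # attention weights, (N,)
    r = H @ alpha                                   # weighted representation, (d,)
    return r, alpha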

4. Performance Validation

In this study, the performance validation of the EADL-BTMIC model is carried out using the Figshare dataset [26]. The details of the dataset are shown in Table 1, and a few sample images are illustrated in Figure 3. The dataset holds 3064 T1-weighted contrast-enhanced images with three kinds of BT: 708 images in the Meningioma class, 930 images in the Pituitary class, and 1426 images in the Glioma class. The experimental validation is performed on two distinct splits of the dataset: 80% training with 20% testing data, and 70% training with 30% testing data. The proposed model is simulated using the Python 3.6.5 tool. The parameter settings are as follows: learning rate, 0.01; dropout, 0.5; batch size, 5; epoch count, 50; activation, ReLU.
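For reproducibility, the sketch below sets up the two evaluation splits described above with scikit-learn; stratifying by class and fixing the random seed are assumptions, since the paper does not state how the split was drawn.

from sklearn.model_selection import train_test_split

# Hyperparameter settings quoted above.
SETTINGS = {"learning_rate": 0.01, "dropout": 0.5,
            "batch_size": 5, "epochs": 50, "activation": "relu"}

def make_split(images, labels, test_size):
    # test_size = 0.20 reproduces the 80:20 experiment, 0.30 the 70:30 one.
    return train_test_split(images, labels, test_size=test_size,
                            stratify=labels, random_state=42)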
Figure 4 demonstrates the confusion matrices formed by the EADL-BTMIC model on the test data. With 80% of TR data, the EADL-BTMIC model identified 553, 714, and 1121 samples in the MEN, PIT, and GLI classes, respectively. With 20% of TS data, the EADL-BTMIC methodology identified 127, 189, and 288 samples in the MEN, PIT, and GLI classes, correspondingly. Additionally, with 70% of TR data, the EADL-BTMIC system identified 493, 644, and 977 samples in the MEN, PIT, and GLI classes, correspondingly. Finally, with 30% of TS data, the EADL-BTMIC technique identified 210, 268, and 429 samples in the MEN, PIT, and GLI classes, respectively.
Table 2 and Figure 5 depict the overall classifier results of the EADL-BTMIC model on 80% of training (TR) data and 20% of testing (TS) data. The experimental values show that the EADL-BTMIC technique exhibited effectual outcomes. For instance, with 80% of TR data, the EADL-BTMIC model obtained increased accu_y, prec_n, reca_l, spec_y, and F_score of 98.29%, 97.37%, 97.04%, 98.64%, and 97.20%, respectively. At the same time, with 20% of TS data, the EADL-BTMIC methodology reached enhanced accu_y, prec_n, reca_l, spec_y, and F_score of 99.02%, 98.22%, 98.54%, 99.27%, and 98.37%, correspondingly.
Table 3 and Figure 6 display the overall classifier results of the EADL-BTMIC technique on 70% of TR data and 30% of TS data. The experimental values reveal that the EADL-BTMIC technique illustrated effectual outcomes.
For instance, with 70% of TR data, the EADL-BTMIC technique attained enhanced accu_y, prec_n, reca_l, spec_y, and F_score of 99.07%, 98.45%, 98.67%, 99.29%, and 98.55%, correspondingly. Simultaneously, with 30% of TS data, the EADL-BTMIC approach reached increased accu_y, prec_n, reca_l, spec_y, and F_score of 99.06%, 98.34%, 98.61%, 99.30%, and 98.46%, correspondingly.
Figure 7 offers the accuracy and loss graph analysis of the EADL-BTMIC approach on a distinct set of TR/TS datasets. The outcomes demonstrated that the accuracy value is enhanced, and the loss value tends to reduce with a higher epoch count. It is also observed that the training loss is minimal, and validation accuracy is superior on distinct sets of TR/TS datasets.
A brief precision-recall examination of the EADL-BTMIC model on the test dataset is portrayed in Figure 8. By observing the figure, it is noticed that the EADL-BTMIC model has accomplished maximum precision-recall performance under all classes.
A detailed ROC investigation of the EADL-BTMIC model on the test dataset is portrayed in Figure 9. The results indicated that the EADL-BTMIC model exhibited its ability to categorize three different classes, such as meningioma, pituitary, and glioma, on the test dataset.
Table 4 provides a comprehensive comparison of the EADL-BTMIC model with other models [15,17]. Figure 10 demonstrates a brief accu_y and spec_y assessment of the EADL-BTMIC approach against existing models. The figure indicates that the AlexNet-FC7 method showed the lowest performance, with minimal accu_y and spec_y of 91.21% and 90.36%, respectively. It is followed by the AlexNet-FC6, VGG19-CNN, VGG19-GRU, and VGG19-Bi-GRU models, which accomplished moderately closer values of accu_y and spec_y. However, the EADL-BTMIC model gained maximum accu_y and spec_y of 99.06% and 99.30%, respectively.
Figure 11 illustrates a detailed prec_n and reca_l analysis of the EADL-BTMIC system against recent methods. The figure shows that the AlexNet-FC7 algorithm exhibited lower performance, with minimal prec_n and reca_l of 90.48% and 95.48%, correspondingly. The AlexNet-FC6, VGG19-CNN, VGG19-GRU, and VGG19-Bi-GRU techniques accomplished moderately closer values of prec_n and reca_l. Finally, the EADL-BTMIC technique gained maximal prec_n and reca_l of 98.34% and 98.61%, correspondingly.
The above-mentioned results and discussion ensured that the EADL-BTMIC model resulted in enhanced classification outcomes over other methods.

5. Conclusions

In this study, a new EADL-BTMIC model has been developed to accurately recognize and categorize MRI images for the identification of BT. The EADL-BTMIC model primarily applied BF and the skull stripping process as a pre-processing stage. Additionally, the morphological segmentation process is carried out to determine the affected regions in the image, followed by the STO with Xception model, exploited for feature extraction purposes. Furthermore, the ALSTM model is exploited to classify BTs into distinct classes. To portray the improved performance of the EADL-BTMIC model, a series of simulations were carried out on a benchmark dataset. The experimental outcomes highlighted the enhancements of the EADL-BTMIC model over recent models. In the future, deep instance segmentation models will be applied to improve the BT classification performance of the presented model.

Author Contributions

Conceptualization, M.A.H. and S.B.H.H.; methodology, H.A.M.; software, M.O.; validation, S.S.A., R.M. and A.Y.; formal analysis, F.A.; investigation, R.M.; resources, F.A.; data curation, M.O.; writing—original draft preparation, H.A.M. and S.S.A.; writing—review and editing, R.M. and S.B.H.H.; visualization, M.A.H.; supervision, S.S.A.; project administration, M.A.H.; funding acquisition, H.A.M. and S.S.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Large Groups Project under grant number (25/43). This research was also funded by the Princess Nourah bint Abdulrahman University Researchers Supporting Project, number (PNURSP2022R114), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would also like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work through Grant Code (22UQU4210118DSR29).

Institutional Review Board Statement

This article does not contain any studies with human participants performed by any of the authors.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article as no datasets were generated during the current study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tandel, G.S.; Biswas, M.; Kakde, O.G.; Tiwari, A.; Suri, H.S.; Turk, M.; Laird, J.R.; Asare, C.K.; Ankrah, A.A.; Khanna, N.N.; et al. A review on a deep learning perspective in brain cancer classification. Cancers 2019, 11, 111.
  2. Swati, Z.N.K.; Zhao, Q.; Kabir, M.; Ali, F.; Ali, Z.; Ahmed, S.; Lu, J. Brain tumor classification for MR images using transfer learning and fine-tuning. Comput. Med. Imaging Graph. 2019, 75, 34–46.
  3. Khan, H.A.; Jue, W.; Mushtaq, M.; Mushtaq, M.U. Brain tumor classification in MRI image using convolutional neural network. Math. Biosci. Eng. 2020, 17, 6203–6216.
  4. Qureshi, S.A.; Raza, S.E.A.; Hussain, L.; Malibari, A.A.; Nour, M.K.; Rehman, A.U.; Al-Wesabi, F.N.; Hilal, A.M. Intelligent ultra-light deep learning model for multi-class brain tumor detection. Appl. Sci. 2022, 12, 3715.
  5. Sarhan, A.M. Brain tumor classification in magnetic resonance images using deep learning and wavelet transform. J. Biomed. Sci. Eng. 2020, 13, 102.
  6. Al Duhayyim, M.; Alshahrani, H.M.; Al-Wesabi, F.N.; Al-Hagery, M.A.; Hilal, A.M.; Zaman, A.S. Intelligent machine learning based EEG signal classification model. Comput. Mater. Contin. 2022, 71, 1821–1835.
  7. Poonia, R.C.; Gupta, M.K.; Abunadi, I.; Albraikan, A.A.; Al-Wesabi, F.N.; Hamza, M.A.B.T. Intelligent diagnostic prediction and classification models for detection of kidney disease. Healthcare 2022, 10, 371.
  8. Areej, A.M.; Al-Wesabi, F.N.; Marwa, O.; Mimouna, A.A.; Manar, A.H.; Abdelwahed, M.; Ishfaq, Y.; Abu Sarwar, Z. Arithmetic optimization with RetinaNet model for motor imagery classification on brain computer interface. J. Healthc. Eng. 2022, 2022, 3987494.
  9. Mustafa Hilal, A.; Issaoui, I.; Obayya, M.; Al-Wesabi, F.N.; Nemri, N.; Hamza, M.A.; Al Duhayyim, M.; Zamani, A.S. Modeling of explainable artificial intelligence for biomedical mental disorder diagnosis. Comput. Mater. Contin. 2022, 71, 3853–3867.
  10. Nazir, M.; Shakil, S.; Khurshid, K. Role of deep learning in brain tumor detection and classification (2015 to 2020): A review. Comput. Med. Imaging Graph. 2021, 91, 101940.
  11. Polat, Ö.; Güngen, C. Classification of brain tumors from MR images using deep transfer learning. J. Supercomput. 2021, 77, 7236–7252.
  12. Ramesh, S.; Sasikala, S.; Paramanandham, N. Segmentation and classification of brain tumors using modified median noise filter and deep learning approaches. Multimed. Tools Appl. 2021, 80, 11789–11813.
  13. Nawaz, S.A.; Khan, D.M.; Qadri, S. Brain tumor classification based on hybrid optimized multi-features analysis using magnetic resonance imaging dataset. Appl. Artif. Intell. 2022, 36, 2031824.
  14. Shaik, N.S.; Cherukuri, T.K. Multi-level attention network: Application to brain tumor classification. Signal Image Video Process. 2022, 16, 817–824.
  15. Abd El Kader, I.; Xu, G.; Shuai, Z.; Saminu, S.; Javaid, I.; Salim Ahmad, I. Differential deep convolutional neural network model for brain tumor classification. Brain Sci. 2021, 11, 352.
  16. Masood, M.; Nazir, T.; Nawaz, M.; Mehmood, A.; Rashid, J.; Kwon, H.Y.; Mahmood, T.; Hussain, A. A novel deep learning method for recognition and classification of brain tumors from MRI images. Diagnostics 2021, 11, 744.
  17. Jia, Z.; Chen, D. Brain tumor identification and classification of MRI images using deep learning techniques. IEEE Access 2020.
  18. Mohsen, H.; El-Dahshan, E.S.A.; El-Horbaty, E.S.M.; Salem, A.B.M. Classification using deep learning neural networks for brain tumors. Future Comput. Inform. J. 2018, 3, 68–71.
  19. Gab Allah, A.M.; Sarhan, A.M.; Elshennawy, N.M. Classification of brain MRI tumor images based on deep learning PGGAN augmentation. Diagnostics 2021, 11, 2343.
  20. Bodapati, J.D.; Shaik, N.S.; Naralasetti, V.; Mundukur, N.B. Joint training of two-channel deep neural network for brain tumor classification. Signal Image Video Process. 2021, 15, 753–760.
  21. Rehman, A.; Naz, S.; Razzak, M.I.; Akram, F.; Imran, M. A deep learning-based framework for automatic brain tumors classification using transfer learning. Circuits Syst. Signal Process. 2020, 39, 757–775.
  22. Sharma, S.; Kumar, S. The Xception model: A potential feature extractor in breast cancer histology images classification. ICT Express 2022, 8, 101–108.
  23. Singh, A.; Sharma, A.; Rajput, S.; Mondal, A.K.; Bose, A.; Ram, M. Parameter extraction of solar module using the sooty tern optimization algorithm. Electronics 2022, 11, 564.
  24. Salur, M.U.; Aydin, I. A novel hybrid deep learning model for sentiment classification. IEEE Access 2020, 8, 58080–58093.
  25. Mengash, H.A.; Mahmoud, H.A.H. Brain cancer tumor classification from motion-corrected MRI images using convolutional neural network. Comput. Mater. Contin. 2021, 68, 1551–1563.
  26. Cheng, J. Brain Tumor Dataset. Figshare. Available online: https://figshare.com/articles/dataset/brain_tumor_dataset/1512427/5 (accessed on 31 May 2022).
Figure 1. Overall process of EADL-BTMIC technique.
Figure 2. Architecture of LSTM.
Figure 3. Sample images.
Figure 4. Confusion matrices of EADL-BTMIC technique: (a) 80% of TR data, (b) 20% of TS data, (c) 70% of TR data, and (d) 30% of TS data.
Figure 5. Result analysis of EADL-BTMIC technique on 80% of TR and 20% of TS data.
Figure 6. Result analysis of EADL-BTMIC technique on 70% of TR and 30% of TS data.
Figure 7. Accuracy and loss analysis of EADL-BTMIC technique: (a) 80:20 TR/TS accuracy, (b) 80:20 TR/TS loss, (c) 70:30 TR/TS accuracy, and (d) 70:30 TR/TS loss.
Figure 8. Precision-recall curve analysis of EADL-BTMIC technique.
Figure 9. ROC curve analysis of EADL-BTMIC technique.
Figure 10. Accu_y and spec_y analysis of EADL-BTMIC technique with recent algorithms.
Figure 11. Prec_n and reca_l analysis of EADL-BTMIC technique with recent algorithms.
Table 1. Dataset details.

Classes | No. of Images
Meningioma (MEN) | 708
Pituitary (PIT) | 930
Glioma (GLI) | 1426
Total | 3064
Table 2. Result analysis of EADL-BTMIC technique with various measures on 80% of TR and 20% of TS data.

Labels | Accuracy | Precision | Recall | Specificity | F-Score
Training Phase (80%)
Meningioma | 98.25 | 97.19 | 95.34 | 99.14 | 96.26
Pituitary | 98.29 | 97.28 | 97.01 | 98.83 | 97.14
Glioma | 98.33 | 97.65 | 98.77 | 97.95 | 98.20
Average | 98.29 | 97.37 | 97.04 | 98.64 | 97.20
Testing Phase (20%)
Meningioma | 99.02 | 96.21 | 99.22 | 98.97 | 97.69
Pituitary | 99.02 | 99.47 | 97.42 | 99.76 | 98.44
Glioma | 99.02 | 98.97 | 98.97 | 99.07 | 98.97
Average | 99.02 | 98.22 | 98.54 | 99.27 | 98.37
Table 3. Result analysis of EADL-BTMIC technique with various measures on 70% of TR and 30% of TS data.

Labels | Accuracy | Precision | Recall | Specificity | F-Score
Training Phase (70%)
Meningioma | 99.21 | 97.43 | 99.20 | 99.21 | 98.31
Pituitary | 99.16 | 98.92 | 98.32 | 99.53 | 98.62
Glioma | 98.83 | 98.99 | 98.49 | 99.13 | 98.74
Average | 99.07 | 98.45 | 98.67 | 99.29 | 98.55
Testing Phase (30%)
Meningioma | 99.02 | 96.33 | 99.53 | 98.87 | 97.90
Pituitary | 99.13 | 99.63 | 97.45 | 99.84 | 98.53
Glioma | 99.02 | 99.08 | 98.85 | 99.18 | 98.96
Average | 99.06 | 98.34 | 98.61 | 99.30 | 98.46
Table 4. Comparative analysis of EADL-BTMIC approach with recent algorithms.

Methods | Accuracy | Precision | Recall | Specificity
AlexNet-FC6 | 95.44 | 94.96 | 97.53 | 95.30
AlexNet-FC7 | 91.21 | 90.48 | 95.48 | 90.36
VGG19-CNN | 95.06 | 94.68 | 94.54 | 95.48
VGG19-GRU | 96.61 | 94.12 | 94.69 | 94.91
VGG19-Bi-GRU | 94.14 | 95.77 | 96.10 | 95.14
EADL-BTMIC | 99.06 | 98.34 | 98.61 | 99.30
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
