Article

Classification Framework for Medical Diagnosis of Brain Tumor with an Effective Hybrid Transfer Learning Model

by
Nagwan Abdel Samee
1,
Noha F. Mahmoud
2,*,
Ghada Atteia
1,*,
Hanaa A. Abdallah
1,
Maali Alabdulhafith
1,
Mehdhar S. A. M. Al-Gaashani
3,
Shahab Ahmad
4 and
Mohammed Saleh Ali Muthanna
5
1
Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
2
Rehabilitation Sciences Department, Health and Rehabilitation Sciences College, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
3
College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
4
School of Economics & Management, Chongqing University of Post and Telecommunication, Chongqing 400065, China
5
Institute of Computer Technologies and Information Security, Southern Federal University, 347922 Taganrog, Russia
*
Authors to whom correspondence should be addressed.
Diagnostics 2022, 12(10), 2541; https://doi.org/10.3390/diagnostics12102541
Submission received: 25 September 2022 / Revised: 13 October 2022 / Accepted: 13 October 2022 / Published: 20 October 2022
(This article belongs to the Special Issue Artificial Intelligence in Clinical Medical Imaging Analysis)

Abstract

:
Brain tumors (BTs) are deadly diseases that can strike people of any age, anywhere in the world. Every year, thousands of people die of brain tumors. Brain-related diagnoses require caution; even the smallest diagnostic error can have serious repercussions. Medical errors in brain tumor diagnosis are common and frequently result in higher patient mortality rates. Magnetic resonance imaging (MRI) is widely used for tumor evaluation and detection. However, MRI generates large amounts of data, making manual segmentation a difficult and laborious task and limiting the use of accurate measurements in clinical practice. Automated, dependable segmentation methods are therefore required. Automatic segmentation and early detection of brain tumors are difficult tasks in computer vision because of their high spatial and structural variability, which makes early diagnosis, detection, and treatment critical. Various traditional machine learning (ML) techniques have been used to detect different types of brain tumors, but the main weakness of these models is that their features must be extracted manually. To address these issues, this paper presents a hybrid deep transfer learning model (GN-AlexNet) for BT tri-classification (pituitary, meningioma, and glioma). The proposed model combines the GoogleNet architecture with the AlexNet model by removing the last five layers of GoogleNet and adding ten AlexNet layers, so that features are extracted and classified automatically. On the same CE-MRI dataset, the proposed model was compared with transfer learning techniques (VGG-16, AlexNet, SqueezeNet, ResNet, and MobileNet-V2) and other ML/DL models. The proposed model outperformed the current methods in terms of accuracy and sensitivity (accuracy of 99.51% and sensitivity of 98.90%).

1. Introduction

The brain and spinal cord are the two primary control centers of the human body; hence, any harm to this region is considered extremely dangerous. Tumors are formations of abnormal tissue that can occur in any part of the human body [1]. One of the distinguishing features of brain cancer is the abnormal development of cells in the brain or spinal cord. Because of their different natures, malignant and benign brain tumors require different treatments. Based on benignity and malignancy, the World Health Organization (WHO) has classified brain cancers into four major groups (Grades I–IV). Malignant brain tumors of grades III and IV grow quickly, metastasize (spread throughout the body), and have a negative impact on healthy cells. Furthermore, because of the rigidity of the skull, additional growth of brain cells may create pressure within the skull, which can harm the brain. Surgery, chemotherapy, radiation, and various treatment combinations are among the most advanced therapies available today. Even with the most intensive medical supervision, patients frequently do not live for more than 14 months [2]. As a result, early brain tumor detection is a crucial step in meticulously planning therapy, and patients’ chances of survival may improve if the BT is diagnosed early. Computed tomography (CT) and magnetic resonance imaging (MRI) are two methods commonly used in the diagnosis and evaluation of brain tumors. MRI uses magnetic fields rather than the X-rays used in CT scans to obtain a comprehensive image of the body. These techniques give medical professionals an in-depth view of the inside of the body, helping them determine the presence and location of abnormalities. Early BT detection and classification therefore allows medical professionals to better plan appropriate therapy using MRI as well as other imaging modalities [3]. Pituitary tumors, gliomas, and meningiomas represent the primary subtypes of BT. Pituitary tumors are usually harmless and develop in the pituitary gland, which is located in the brain’s basal layer and is responsible for the production of many important hormones [4]. A glioma is a malignancy that develops when glial cells grow uncontrollably; such cells normally support nerves and facilitate the function of the central nervous system. Gliomas typically develop in the brain, but they can also develop in the spinal cord [5]. Meningiomas are cancers that develop on the membrane that serves as a protective covering for the brain and spinal cord [6]. Identification of BT requires the ability to differentiate between normal and abnormal brain tissue. Differences in shape, position, and size increase the difficulty of detecting BT, but it remains a challenge that must be overcome. BT analysis makes use of some of the most fundamental aspects of medical image processing, including classification, segmentation, and detection [7]. During the preliminary stages of treatment, BT classification is a crucial step in identifying the specific kind of tumor that is present. The field of biomedical image processing offers several cutting-edge computer-aided diagnostic tools that help radiologists guide patients and improve BT classification accuracy [8]. High-grade brain tumors are a dangerous condition that significantly reduces a patient’s life expectancy; early BT diagnosis is therefore critical, as delayed treatment does little to improve the patient’s quality of life [9].
The proposed CAD system is intended to work under the supervision of a radiologist [10], and accurate tumor segmentation is required for better cancer identification.
In addition, the diagnosis of brain tumors in clinics relies on the visual inspection of patients’ scans by pathologists, who are highly trained experts in the field of neurosurgery. However, this procedure is done manually, which is not only time-consuming but also tedious and highly susceptible to pathologist error, partly because most cells are typically viewed from random, arbitrary, and uncontrolled visual angles.
It is essential to identify the type of tumor that a patient has, including whether it is glioma, pituitary, or meningioma. The purpose of this study is to differentiate between the three categories. Early diagnosis of the three types of BTs is essentially based on the neurosurgical point of view.
To address the issues above, various ML/DL models have been applied to BT segmentation, in particular deep transfer learning (DTL) models [11,12,13,14]. Within DTL, the GoogleNet model has recently gained considerable popularity in medical image classification and grading; it is widely used to work directly with raw images and achieves strong classification performance. Moreover, the layers of a transfer learning model such as AlexNet are used extensively for feature extraction [15]. AlexNet is the most widely used deep transfer learning (TL) model, and its primary application is image categorization [16]. The complementary advantages of the two models motivated the researchers to use a hybrid deep transfer model to improve the accuracy and performance of current algorithms in identifying different types of brain tumors, and to evaluate the approach on a freely available public dataset. The research question of this study is therefore: “how accurately and effectively can the hybrid TL system recognize and categorize various types of BT diseases?”. The following is a list of our key contributions to this research:
  • A precise Computer Aided Diagnosis system for BT is presented using deep learning.
  • A new hybrid deep learning approach, GN-AlexNet, is introduced for the classification of three types of brain tumors (pituitary, meningioma, and glioma). The proposed CAD system is thoroughly tested on a publicly available benchmark dataset of Contrast-Enhanced magnetic resonance images (CE-MRI).
  • In terms of accuracy and sensitivity, the proposed model performed significantly better than the existing techniques (with an accuracy of 99.51% and a sensitivity of 98.90%).
  • High classification performance has been achieved with the suggested model, together with decreased time complexity (ms). The GN-AlexNet classifier can be used for effective BT diagnosis in clinical and biomedical research.
The remainder of the paper is organized as follows. Section 2 discusses the literature review. Model specifications are presented in Section 3. Results and discussion are presented in Section 4. The final section presents the study’s conclusions and suggestions for future work.

2. Literature Review

Many machine learning and deep learning models [17,18,19,20,21,22,23,24,25] have been used to classify and detect anomalies in biological images. The model presented in [26] outlines a process for the early diagnosis of brain tumors that involves the extraction and concatenation of multiple levels of features. Inception-v3 and DenseNet201 are the two deep learning models that were used to verify this model before it was trained, and they were used to investigate two distinct approaches to BT classification. Using a dataset of 253 images and pre-trained versions of VGG-16 and Inception-V3, a model for the automated identification of brain tumors was developed and presented [26]. This dataset includes 155 images of cancerous tumors and 98 images of healthy tissue. The dataset was too small to fine-tune the CNNs, and the test set was also too small to properly assess the model’s performance.
Using VGG-16 and the BraTS dataset, a model for the automated identification of brain tumors was proposed [27]. Through transfer learning and fine-tuning over a total of 50 epochs, the accuracy of the model was raised to 84%. Srivastava et al. presented the dropout strategy as a solution to the problem of overfitting in neural networks [28]; this technique involves randomly removing units and the connections between them.
P. Dvorak et al. [29] selected a convolutional neural network as the learning method because it handles feature correlations effectively, using the freely available BRATS2014 dataset, which contains three distinct multimodal segmentation tasks. They evaluated the method and obtained state-of-the-art results on the brain tumor segmentation dataset, which included 254 multimodal volumes, requiring only 13 s per volume to process. Artificial convolutional neural networks were implemented by Irsheidat et al. in their research paper [30]; the results indicated that the CNN model obtained satisfactory results.
This model performs convolutional operations directly on raw MRI data. The network predicts the presence of brain tumors accurately because it was trained on MRI scans of 155 tumorous and 98 healthy brains. Sravya et al. [31] surveyed brain tumor identification and outlined several key issues and methodologies. Using the YOLO model and the deep learning package FastAi with the BRATS 2018 dataset, which included 1992 brain MRI images, an automated brain tumor identification method was developed and investigated [32]. The accuracy of YOLO was measured at 85.95%, whereas the accuracy of the FastAi classification model was measured at 95.78%.
A VGG-16 model for brain tumor detection was proposed [33] to classify MRI images as either tumorous or non-tumorous. Training was done on a Kaggle dataset, and the authors demonstrated an increase in accuracy as a result. However, the authors retrained the model in its entirety.
M.O. Khairandish et al. [34] proposed a hybrid model combining a CNN and a support vector machine (SVM) for brain MRI image classification. The study conducted comparative segmentation experiments with different DL models, and the proposed approach achieved a classification accuracy of up to 99%.
For BT detection and classification, Swati et al. [35] implemented a CNN-based block-wise fine-tuning (BFT) system. In comparison to manually constructed features, the fine-tuned VGG-19 using the BFT approach obtained 94.84% classification accuracy in a shorter training time. Kumar et al. [36] presented Dolphin-SCA, a novel optimized DL method, for the detection and categorization of BT. The mechanism makes use of a deep convolutional neural network. For segmentation, the researchers employed a fuzzy deformable fusion model in conjunction with a sine cosine algorithm based on dolphin echolocation (Dolphin-SCA). A deep neural network built on Dolphin-SCA, using power statistical features and LDP, was then applied to the retrieved features. With this method, the classification accuracy was 96.3%. Deepak et al. [37] employed a pre-trained version of GoogleNet for feature extraction and relied on proven classifier models for BT classification; in comparison to the most recent and cutting-edge methodologies, their technique achieved an accuracy of 98%. A hybrid model for BT classification was presented by Raja et al. [38] that combines several stages (namely, pre-processing using a non-local express filter and segmentation using the Bayesian fuzzy technique). The scattering transform, the wavelet packet Tsallis entropy approach, and theoretical measures were then used to extract several previously measured aspects of the image. A classification accuracy of 98.5% was achieved using a hybrid strategy based on a combination of a softmax layer and a deep autoencoder. Ramamurthy et al. [39] developed a novel DL-based method for BT detection, optimizing the process with the Whale Harris Hawks framework. After segmenting the tumors in the images using cellular automata, several parameters including the mean, size, kurtosis, and variance are retrieved; the components are then classified for improved brain tumor identification using the Whale Harris Hawks optimization method. Using a skull-stripping algorithm in conjunction with a support vector machine (SVM) classifier, the suggested approach achieved its best accuracy of 81.6%. Bahadure et al. [40] used the Berkeley wavelet transform (BWT) and segmented features (contrast, texture, shape, and color) to identify brain tumors (BT). A skull-stripping algorithm was utilized to remove non-brain parts from the MR images. The experiments showed an accuracy of 96.51%. Waghmare et al. [41] used several distinct CNN architectures to recognize and classify brain tumors. The classification accuracy on the expanded dataset was improved by fine-tuning the VGG-16 model, which achieved the highest accuracy of 95.71%.
The current state of the art in using deep learning to classify MRI images of brain tumors shows the networks’ strong performance in correctly and precisely classifying brain tissues, indicating wider usage in this field and the contribution of transfer learning to high accuracy in brain MRI segmentation. The literature above clearly shows that TL models perform better than other DL models in brain tumor detection and classification. To further improve the classification performance on BTs, we combine AlexNet and GoogleNet into a hybrid transfer learning model for the segmentation and classification of three types of brain tumors (pituitary, meningioma, and glioma).

3. Methodology

This section describes the methodology and the dataset, including the CE-MRI dataset and its pre-processing, as well as the proposed GN-AlexNet deep learning model for the tri-classification of BTs in CE-MRI images. The main modules in the framework for developing and testing the proposed CAD system for BT classification are shown in Figure 1. Furthermore, the performance indicators (accuracy, precision, sensitivity, and specificity) quantify the classification performance of the proposed model.

3.1. Brain Tumor CE-MRI Dataset

To conduct their research, the authors used a publicly available MRI dataset [42]. The TJU Hospital in China gathered brain MRI scans from 262 patients. There are 3075 brain MRI images in total, including 1427 gliomas, 708 meningiomas, and 940 pituitary tumor images. Figure 2 shows examples of the various classes of BT images. Each image is a 2D slice of 512 × 512 pixels with a pixel size of 0.49 × 0.49 mm². The dataset is available online in .mat format on figshare. In this study, 2146 MRI images (70%) were used for training, while 918 (30%) were used for testing. Table 1 contains detailed information about the CE-MRI dataset, such as the number of images, patients, and the class label for each type of brain cancer (glioma, pituitary, and meningioma), as shown in Figure 2.
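For readers who wish to reproduce the pipeline, the sketch below shows one way to read a single scan from the figshare release, which stores each image in a MATLAB v7.3 (.mat) file. The cjdata field layout, the helper name load_ce_mri, and the normalization step are assumptions based on the public dataset description, not the authors’ exact code.

```python
# Hypothetical loader for one figshare CE-MRI scan; assumes the MATLAB v7.3
# HDF5 layout with a "cjdata" struct holding "image" and "label" fields.
import h5py
import numpy as np

def load_ce_mri(path):
    """Return a normalized image and its class label from one .mat file."""
    with h5py.File(path, "r") as f:
        image = np.array(f["cjdata"]["image"], dtype=np.float32)
        # Paper's convention: 1 = glioma, 2 = pituitary, 3 = meningioma
        label = int(f["cjdata"]["label"][0, 0])
    image -= image.min()          # scale intensities to [0, 1]
    if image.max() > 0:
        image /= image.max()
    return image, label
```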

3.2. Data Preprocessing and Augmentation

This dataset includes 3075 images of different types of BTs. Each image was transformed to grayscale format. These preprocessed data, along with the label of each image, are the inputs to the neural network. Label 1 denotes a glioma; Label 2, a pituitary tumor; and Label 3, a meningioma. However, a dataset of 3075 MRI images is too small to effectively train the hybrid deep transfer learning (GN-AlexNet) model, which has one million parameters. Data augmentation is the approach taken to solve this issue: by rotating, scaling, and adding noise to existing data, this method artificially increases the size of the dataset. The data can be augmented by zooming in on the image, flipping it horizontally or vertically, rotating it by a predetermined angle, and adjusting the brightness upwards or downwards. The MRI images were augmented using all of these methods, as sketched below. Through the augmentation methods, the data size was increased by a factor of 16, which alleviated the issue of overfitting [11].
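A minimal sketch of this augmentation stage, using the Keras utilities mentioned in Section 3.4, follows. The specific rotation angle, zoom factor, brightness range, and noise level are illustrative assumptions, since the paper does not report exact values.

```python
# Augmentation sketch: rotation, zoom, flips, brightness shifts, and additive
# noise, matching the operations described above. Parameter values are assumed.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def add_noise(img):
    """Add mild Gaussian noise after the geometric/photometric transforms."""
    return img + np.random.normal(0.0, 0.01, img.shape)

augmenter = ImageDataGenerator(
    rotation_range=15,             # rotate by a random angle up to +/- 15 degrees
    zoom_range=0.1,                # zoom in or out by up to 10%
    horizontal_flip=True,
    vertical_flip=True,
    brightness_range=(0.8, 1.2),   # shift brightness up or down
    preprocessing_function=add_noise,
)

# x_train: (N, 224, 224, 1) grayscale slices; y_train: one-hot labels.
# flow() then yields augmented batches on the fly during training:
# train_iter = augmenter.flow(x_train, y_train, batch_size=32)
```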

3.3. Proposed Model

Within the scope of this study, we propose a hybrid transfer learning model that combines AlexNet and GoogleNet to produce a hybrid deep learning model for the segmentation and classification of three distinct types of brain tumors (pituitary, meningioma, and glioma). First, in this section, we go over AlexNet and GoogleNet, and then, the specifics of the hybrid GN-AlexNet deep learning model are described.

3.3.1. AlexNet

The AlexNet model was developed by Alex Krizhevsky et al. [43]. On 30 September 2012, AlexNet participated in the ImageNet Large-Scale Visual Recognition Challenge. The network’s top-5 error rate was 15.3%, more than 10.8 percentage points lower than that of the runner-up. The primary finding of the original study was that the model’s depth was crucial to its higher performance; this depth was computationally expensive but made feasible by the use of graphics processing units (GPUs) during training, as seen in Figure 3.
In addition, increasing the number of convolution layers in AlexNet produced features that were more specific, precise, and robust. In contrast to the first layer, which extracted only low-level features, the subsequent two convolutional layers were able to extract high-level characteristics. The max-pooling layer helped to improve the accuracy towards the end of the network, as shown in Figure 3.

3.3.2. GoogleNet

The GoogleNet blocks and their fundamental layers, together with AlexNet layers, make up the hybrid deep (GN-AlexNet) model. Training a convolutional network from scratch is challenging and can take several hours, so rather than training a new deep learning classifier from the beginning, it is preferable to build the proposed model on a pre-trained classifier.
Considering GoogleNet’s [16] success in the ILSVRC (2014) ImageNet competition, we decided to make it the cornerstone of our own research. In total, GoogleNet consists of 144 layers, of which only 22 are learnable. These include two convolution layers, four max-pooling layers, one average-pooling layer, two normalization layers, one fully connected layer, and nine inception modules. Additionally, each inception module contains one max-pooling layer in addition to the six standard convolution layers. The GoogleNet architecture was given a new input layer with dimensions of 224 × 224 × 1. The ReLU activation function is used in the pre-trained GoogLeNet methodology; throughout processing, ReLU discards any values in the negative range and replaces them with zero. Leaky ReLU, an improved version of ReLU that replaces the zero output for negative inputs with a small negative slope, is another option [44]. The last five layers of the GoogleNet classifier were removed, and ten new layers were added in their place. In addition, the ReLU activation function in the feature map layer was replaced with the Leaky ReLU activation function to enhance the expressiveness of the proposed model and to address the dying ReLU problem.
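The sketch below illustrates this kind of transfer-learning surgery in Keras: truncate a pretrained backbone and graft a new trainable head that uses Leaky ReLU. Keras does not ship the original GoogleNet, so InceptionV3 stands in for it here, and the head’s layer sizes are illustrative assumptions rather than the authors’ exact configuration.

```python
# Transfer-learning surgery sketch: drop the pretrained classifier layers and
# append a new trainable head with Leaky ReLU. InceptionV3 stands in for
# GoogleNet; grayscale slices would be stacked to 3 channels for ImageNet weights.
from tensorflow.keras import Model, layers
from tensorflow.keras.applications import InceptionV3

base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))

x = layers.GlobalAveragePooling2D()(base.output)   # truncated backbone output
x = layers.Dense(512)(x)
x = layers.LeakyReLU(alpha=0.01)(x)   # small negative slope avoids "dying" units
x = layers.BatchNormalization()(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(3, activation="softmax")(x)  # glioma / pituitary / meningioma

model = Model(base.input, outputs)
```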
Three methods from NIN (Network in Network)—inception modules, global average pooling, and the 1 × 1 convolution—have been implemented in GoogleNet. Figure 4a depicts an inception module, which consists of a max-pooling layer of size 3 × 3 and three convolutional layers of sizes 1 × 1, 3 × 3, and 5 × 5 that operate in parallel. The inception module accepts data from a lower layer, processes it using parallel operations, and passes the concatenated resulting feature maps on to the next layer. This allows the network to widen without a proportional increase in computation. As seen in Figure 4b, a 1 × 1 convolution applied to the inception module’s internal layers significantly reduces the module’s processing requirements.
This was possible without changing the fundamental structure of the convolutional neural network. After these modifications were made, the total number of layers increased from 144 to 154. A filter (patch) size of 8 × 8 was used in the first convolution layer, which rapidly reduced the image size. The 1 × 1 convolution block was used in the convolutional network’s second layer, which had a depth of 2, for dimensionality reduction. In addition, GoogleNet’s inception module makes use of several convolution kernels, including 1 × 1, 3 × 3, and 5 × 5, to extract features at a range of granularities, from the tiniest details to the most fundamental aspects [45].
The larger the convolution kernel, the larger the area it covers while computing features. Similarly, the 1 × 1 convolution kernel provides more information while also reducing the amount of processing required. Four convolutional layers with a very small filter size of 1 × 1 have been included as part of the recent updates, as shown in Table 2.
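As a concrete illustration of the module in Figure 4b, the sketch below builds one inception block with 1 × 1 bottleneck convolutions placed before the 3 × 3 and 5 × 5 branches. The helper name inception_module and the filter counts are illustrative assumptions.

```python
# One inception block in the style of Figure 4b: parallel 1x1, 3x3, and 5x5
# convolutions plus pooled features, with 1x1 reductions before the costly branches.
from tensorflow.keras import layers

def inception_module(x, f1, f3_in, f3, f5_in, f5, fpool):
    b1 = layers.Conv2D(f1, 1, padding="same", activation="relu")(x)

    b3 = layers.Conv2D(f3_in, 1, padding="same", activation="relu")(x)  # bottleneck
    b3 = layers.Conv2D(f3, 3, padding="same", activation="relu")(b3)

    b5 = layers.Conv2D(f5_in, 1, padding="same", activation="relu")(x)  # bottleneck
    b5 = layers.Conv2D(f5, 5, padding="same", activation="relu")(b5)

    bp = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    bp = layers.Conv2D(fpool, 1, padding="same", activation="relu")(bp)

    # Stack the parallel feature maps along the channel axis
    return layers.Concatenate(axis=-1)([b1, b3, b5, bp])
```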

3.3.3. The Hybrid GN-AlexNet Deep Learning Model

In addition, the ReLU activation function in the feature map layer was modified to improve the expressiveness of the proposed model and to overcome the dying ReLU issue. As a consequence, the proposed model was able to extract more comprehensive, discriminative, and deep features than the state-of-the-art pre-trained deep learning models mentioned earlier. This resulted in improved classification performance, as can be seen in Figure 5.
The proposed hybrid learning model (GN-AlexNet) has several layers: the input, convolutional, activation-function, normalization, max-pooling, fully connected, softmax, and classification layers. The input layer receives the images; in the GN-AlexNet learning model, the input images have a size of 224 × 224 × 1, where the three numbers represent the image’s width, height, and number of channels (1 for grayscale images and 3 for color images). The images are first sent to the input layer before any further processing begins. The convolutional layer involves a mathematical operation with two inputs: the input image matrix and a filter. The input image is convolved with the filter, and a feature map is generated as output. The convolution layer is expressed mathematically in Equation (1):
$Z_b^a = \sum_{k} K_a^{jk} * y_l^{c-1} + a_d^c$   (1)
where $K$ represents the convolution kernel of layer $d$, $a_d^c$ is the bias, and $Z_b^a$ is the feature map computed from the output $y_l^{c-1}$ of layer $c-1$.
The activation layer includes an activation function that gives nonlinearity to the neural network. Rectified linear units (ReLUs) are used because they increase the training speed. Equation (2) gives the ReLU activation:
$R(x) = \begin{cases} x & \text{if } x > 0 \\ 0 & \text{if } x \le 0 \end{cases}$   (2)
To normalize the parameters generated by the convolution layers, a batch normalization layer is applied to their outputs. Normalization shortens the proposed model’s training period, making the learning process both more effective and faster.
The main limitation of the convolutional layer is that it captures features that depend on their location: if the location of a feature in the image changes slightly, the classification becomes inaccurate. Max-pooling allows the network to overcome this limitation by making the representation more compact, so that it is invariant to minor changes and insignificant details. Max pooling and average pooling were used to aggregate the features.
The fully connected layer receives the features learned in the convolutional layers. When a layer is said to be “fully connected”, it means that all of its nodes are linked to those in the next layer. This layer’s key focus is to label input images with their respective classes. A softmax activation function is used in this layer.
The loss function (H) must be minimized during training. The output is calculated after the image has passed through all the previous layers and is compared with the desired output using the loss function, from which the error is computed. This process is repeated over several iterations until the loss function is minimized. The loss function used is categorical cross-entropy (CCE), given in Equation (3):
$H = -\sum_{m=1}^{M} y_m^l \cdot \log \hat{y}_m^j$   (3)
where $\hat{y}_m^j$ represents the predicted label and $y_m^l$ the target label of sample $m$ among $M$ samples.
The softmax activation function further normalizes the fully connected layer’s output. The network carries out the probabilistic computation, and softmax generates a positive output value for each category, yielding a probability distribution. The classification layer is the final layer of the model; it produces the output by combining all of its inputs. Table 2 lists the layers used in the proposed model, as well as their specifications.
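Putting the pieces together, a hedged training sketch follows: the model from the surgery sketch above is compiled with the categorical cross-entropy loss of Equation (3) and fed the augmented batches from Section 3.2. It reuses the hypothetical model and train_iter names from the earlier sketches and assumes held-out arrays x_test and y_test; the optimizer and epoch count are assumptions, as the paper does not state them.

```python
# Training sketch: categorical cross-entropy (Equation (3)) with a softmax
# output; optimizer and epoch count are illustrative assumptions.
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

history = model.fit(train_iter,                    # augmented batches (Section 3.2)
                    validation_data=(x_test, y_test),
                    epochs=50)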

3.4. Experimental Setup

This research used a range of libraries in its experiments, including TensorFlow, Pandas, NumPy, and Keras. The proposed model was implemented in Keras under Python 3.6. To validate the performance of the proposed framework, analytical simulations were run on a computer equipped with an Intel Core i7 processor and a graphics processing unit (GPU).

3.5. Performance Evaluation Metrics

To assess brain tumor detection performance, evaluation metrics covering all of the framework’s outcomes must be computed. Although there is no single standardized measure for classification performance, the key metrics accuracy (Acc), sensitivity (Sens), precision (Prec), and specificity (Spec) are used frequently.
These metrics were used to investigate the feasibility of the proposed model. They are computed from several critical counts: the true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). Equations (4)–(7) define the key performance indicators employed in this study.
Acc = (TP + TN) / (TP + TN + FP + FN)   (4)
Prec = TP / (TP + FP)   (5)
Sens = TP / (TP + FN)   (6)
Spec = TN / (TN + FP)   (7)
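A small sketch of how these four metrics can be computed per class (one-vs-rest) from a confusion matrix follows; the helper name per_class_metrics is an assumption, and integer class predictions for the three tumor types are assumed.

```python
# Per-class Acc/Prec/Sens/Spec from one-vs-rest counts (Equations (4)-(7)).
from sklearn.metrics import confusion_matrix

def per_class_metrics(y_true, y_pred, n_classes=3):
    cm = confusion_matrix(y_true, y_pred, labels=list(range(n_classes)))
    results = {}
    for c in range(n_classes):
        tp = cm[c, c]
        fn = cm[c, :].sum() - tp   # class-c samples predicted as another class
        fp = cm[:, c].sum() - tp   # other samples predicted as class c
        tn = cm.sum() - tp - fn - fp
        results[c] = {
            "Acc":  (tp + tn) / (tp + tn + fp + fn),
            "Prec": tp / (tp + fp),
            "Sens": tp / (tp + fn),
            "Spec": tn / (tn + fp),
        }
    return results
```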

4. Result and Discussion

This section evaluates the GN-AlexNet model’s performance in comparison with other transfer learning models, including VGG-16, AlexNet, SqueezeNet, ResNet, and MobileNet-V2, using key performance indicators for detecting and classifying brain tumors. Accuracy reflects per-class efficiency; precision represents the reliability of real-time tumor class predictions; specificity reflects the detection of non-tumor classes. The Acc, Prec, Sens, and Spec of the proposed model and the other transfer learning methods were compared in this study. Figure 6 compares the GN-AlexNet model’s attained Acc, Prec, Sens, and Spec with those of the other transfer learning models. As shown in the bar chart, the proposed model outperformed the other TL models in terms of Acc (99.10%), Prec (99%), Sens (98.90%), and Spec (98.50%), while the SqueezeNet model has the lowest performance measures.
When assessing an evaluation indicator, a confusion matrix can be used to quantify how well each class is classified. Measured by the accuracy of its confusion matrix, the proposed GN-AlexNet showed excellent tri-tumor type detection and accurate classification of each brain tumor type in this experiment. Figure 7 displays the confusion matrices of the proposed model and the transfer learning models.
The ROC curve is an essential tool for assessing whether a system is successful at detecting brain tumors. Plotting the true positive rate (TPR) against the false positive rate (FPR), the ROC curve illustrates how well each classifier discriminates. Figure 8 shows that the receiver operating characteristic (ROC) curve of the proposed model is far better than those of the other transfer learning methods.
To examine the proposed model further, we compared it with the top-performing TL models (AlexNet and MobileNet-V2) using the confusion matrix’s key performance indicators: the true positive rate (TPR), true negative rate (TNR), and Matthews correlation coefficient (MCC). As can be seen in Figure 9, the TPR, TNR, and MCC values are all highest for the proposed model.

4.1. FDR, FNR, FOR, and FPR Analysis

In addition, the proposed hybrid DTL model outperforms the state-of-the-art transfer learning models on the current dataset across a wide range of performance metrics, including the false discovery rate (FDR), the false positive rate (FPR), the false negative rate (FNR), and the false omission rate (FOR), defined as follows:
FDR = FP / (FP + TP)
FPR = FP / (FP + TN)
FNR = FN / (FN + TP)
FOR = FN / (FN + TN)
FDR: the false discovery rate is the proportion of positive test results for which the underlying condition is actually negative.
FPR: the false positive rate is the proportion of individuals with a known negative condition for whom the test nonetheless returns a positive result.
FNR: the false negative rate is the proportion of individuals with a known positive condition who receive a negative test result.
FOR: the false omission rate is the proportion of negative test results for which the condition is actually positive.
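Continuing the metric sketch above, these four error rates follow directly from the same one-vs-rest counts; the helper name error_rates is an assumption.

```python
# FDR/FPR/FNR/FOR from the same one-vs-rest counts as per_class_metrics().
def error_rates(tp, tn, fp, fn):
    return {
        "FDR": fp / (fp + tp),   # false discovery rate
        "FPR": fp / (fp + tn),   # false positive rate
        "FNR": fn / (fn + tp),   # false negative rate
        "FOR": fn / (fn + tn),   # false omission rate
    }
```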
Figure 10 demonstrates that the proposed model achieves impressive error rates: an FPR of 0.0030%, a FOR of 0.0050%, an FDR of 0.00525%, and an FNR of 0.0012%.
The training time of a system is an important performance metric, as it measures how long the system takes to learn its features. In this study, the proposed model’s training time was 16 ms, which is very short compared with the other transfer learning methods, as shown in Figure 11.

4.2. Comparative Results with Existing Benchmark

The proposed GN-AlexNet was evaluated alongside other outstanding benchmark algorithms such as LSTM, CNN, and GRU. Its performance was assessed using Acc, Prec, Sens, and Spec, as shown in Figure 12, so that the approach could be validated.
One of the better-known applications of machine learning in the medical field is the classification of brain tumors, and many researchers and developers have investigated the problem of building a reliable CAD system for such uses, publishing their findings as a body of work. As shown in Table 3, the proposed system’s classification performance is compared with that of several state-of-the-art brain tumor detection systems; the purpose of this comparison is to gauge how well the proposed system performs relative to existing work.

5. Conclusions

This research classifies BTs into three categories (pituitary, meningioma, and glioma) using multiple layers of GoogleNet and AlexNet. The GoogleNet architecture served as the basis for the proposed GN-AlexNet framework: after removing the last five layers of GoogleNet, ten AlexNet layers were added, which extract features and classify the different types of BTs automatically. In addition, the ReLU activation function was replaced with a leaky ReLU activation function, while the core architecture of AlexNet was left unchanged.
On the same CE-MRI dataset, the proposed model was compared with transfer learning techniques (VGG-16, AlexNet, SqueezeNet, ResNet, and MobileNet-V2) and other ML/DL models. The proposed hybrid TL model outperformed the current methods in terms of accuracy and sensitivity (accuracy of 99.51% and sensitivity of 98.90%). ROC curves, time complexity (ms), and an extensive set of metrics (FNR, FPR, and MCC values) were used to compare the proposed technique with previous TL methods and the latest ML/DL models. The proposed hybrid model improves the diagnosis of each BT class, with better classification performance and shorter detection time. A future study will examine how the hybrid method performs on other data types, such as spotting signs of lung cancer, COVID-19 infection, and pneumonia. Furthermore, the proposed model still needs to be tested on big data.

Author Contributions

Conceptualization, N.A.S., M.S.A.M.A.-G., S.A. and M.S.A.M.; methodology, N.A.S., M.S.A.M.A.-G., S.A. and M.S.A.M.; software, M.S.A.M.A.-G., S.A. and M.S.A.M.; validation, N.A.S., M.S.A.M.A.-G. and S.A.; formal analysis, N.A.S., M.S.A.M.A.-G., S.A., M.S.A.M., G.A., H.A.A., M.A. and N.F.M.; investigation, N.A.S., M.S.A.M.A.-G., S.A., M.S.A.M., G.A., H.A.A., M.A. and N.F.M.; resources, N.A.S. and N.F.M.; data curation, M.S.A.M.A.-G., S.A. and M.S.A.M.; writing—original draft preparation, N.A.S., M.S.A.M.A.-G., S.A. and M.S.A.M.; writing—review and editing, N.A.S., M.S.A.M.A.-G., S.A., M.S.A.M., G.A., H.A.A., M.A. and N.F.M.; visualization, N.A.S., M.S.A.M.A.-G., S.A. and M.S.A.M.; supervision, N.A.S. and N.F.M.; project administration, N.A.S. and N.F.M. All authors have read and agreed to the published version of the manuscript.

Funding

Princess Nourah bint Abdulrahman University Researchers Supporting Project Number PNURSP2022R206, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are available at https://www.mathworks.com/products/matlab.html (accessed on 10 September 2022).

Acknowledgments

Princess Nourah bint Abdulrahman University Researchers Supporting Project Number PNURSP2022R206, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

References

  1. Van Meir, E.G.; Hadjipanayis, C.G.; Norden, A.D.; Shu, H.K.; Wen, P.Y.; Olson, J.J. Exciting new advances in neuro-oncology: The avenue to a cure for malignant glioma. CA Cancer J. Clin. 2010, 60, 166–193.
  2. Shree, N.V.; Kumar, T.N. Identification and classification of brain tumor MRI images with feature extraction using DWT and probabilistic neural network. Brain Inform. 2018, 5, 23–30.
  3. Saddique, M.; Kazmi, J.H.; Qureshi, K. A hybrid approach of using symmetry technique for brain tumors. Comput. Math. Methods Med. 2014, 2014, 712783.
  4. Komninos, J.; Vlassopoulou, V.; Protopapa, D.; Korfias, S.; Kontogeorgos, G.; Sakas, D.E.; Thalassinos, N.C. Tumors metastatic to the pituitary gland: Case report and literature review. J. Clin. Endocrinol. Metab. 2004, 2, 574–580.
  5. DeAngelis, L.M. Brain tumors. N. Engl. J. Med. 2001, 344, 114–123.
  6. Louis, D.N.; Perry, A.; Reifenberger, G.; Von Deimling, A.; Figarella-Branger, D.; Cavenee, W.K.; Ohgaki, H.; Wiestler, O.D.; Kleihues, P.; Ellison, D.W. The 2016 World Health Organization classification of tumors of the central nervous system: A summary. Acta Neuropathol. 2016, 132, 803–820.
  7. Chahal, P.K.; Pandey, S.; Goel, S. A survey on brain tumors detection techniques for MR images. Multimed. Tools Appl. 2020, 79, 21771–21814.
  8. Sajjad, M.; Khan, S.; Muhammad, K.; Wu, W.; Ullah, A.; Baik, S.W. Multi-grade brain tumor classification using deep CNN with extensive data augmentation. J. Comput. Sci. 2019, 30, 174–182.
  9. Rehman, A.; Naz, S.; Razzak, M.I.; Akram, F.; Imran, M. A deep learning-based framework for automatic brain tumors classification using transfer learning. Circuits Syst. Signal Process. 2020, 39, 757–775.
  10. Wang, Y.; Zu, C.; Hu, G.; Luo, Y.; Ma, Z.; He, K.; Wu, X.; Zhou, J. Automatic tumor segmentation with deep convolutional neural networks for radiotherapy applications. Neural Process. Lett. 2018, 48, 1323–1334.
  11. Jégou, S.; Drozdzal, M.; Vazquez, D.; Romero, A.; Bengio, Y. The one hundred layers tiramisu: Fully convolutional DenseNets for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 11–19.
  12. Zhang, Q.; Cui, Z.; Niu, X.; Geng, S.; Qiao, Y. Image segmentation with pyramid dilated convolution based on ResNet and U-Net. In Proceedings of the International Conference on Neural Information Processing, Guangzhou, China, 14 November 2017; pp. 364–372.
  13. Raza, A.; Ayub, H.; Khan, J.A.; Ahmad, I.; Salama, S.A.; Daradkeh, Y.I.; Javeed, D.; Ur Rehman, A.; Hamam, H. A Hybrid Deep Learning-Based Approach for Brain Tumor Classification. Electronics 2022, 11, 1146.
  14. Ding, Y.; Zhang, C.; Lan, T.; Qin, Z.; Zhang, X.; Wang, W. Classification of Alzheimer’s disease based on the combination of morphometric feature and texture feature. In Proceedings of the 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Washington, DC, USA, 9–12 November 2015; pp. 409–412.
  15. Samee, N.A.; Atteia, G.; Meshoul, S.; Al-Antari, M.A.; Kadah, Y.M. Deep Learning Cascaded Feature Selection Framework for Breast Cancer Classification: Hybrid CNN with Univariate-Based Approach. Mathematics 2022, 10, 3631.
  16. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
  17. Harish, P.; Baskar, S. MRI based detection and classification of brain tumor using enhanced faster R-CNN and Alex Net model. Mater. Today Proc. 2020, 11, 495.
  18. Ijaz, A.; Ullah, I.; Khan, W.U.; Ur Rehman, A.; Adrees, M.S.; Saleem, M.Q.; Cheikhrouhou, O.; Hamam, H.; Shafiq, M. Efficient algorithms for E-healthcare to solve multiobject fuse detection problem. J. Healthc. Eng. 2021, 2021, 9500304.
  19. Ahmad, I.; Liu, Y.; Javeed, D.; Ahmad, S. A decision-making technique for solving order allocation problem using a genetic algorithm. IOP Conf. Ser. Mater. Sci. Eng. 2020, 853, 012054.
  20. Binaghi, E.; Omodei, M.; Pedoia, V.; Balbi, S.; Lattanzi, D.; Monti, E. Automatic segmentation of MR brain tumors images using support vector machine in combination with graph cut. In Proceedings of the 6th International Joint Conference on Computational Intelligence (IJCCI), Rome, Italy, 22–24 October 2014; pp. 152–157.
  21. Wang, X.; Ahmad, I.; Javeed, D.; Zaidi, S.A.; Alotaibi, F.M.; Ghoneim, M.E.; Daradkeh, Y.I.; Asghar, J.; Eldin, E.T. Intelligent Hybrid Deep Learning Model for Breast Cancer Detection. Electronics 2022, 11, 2767.
  22. Ahmad, S.; Ullah, T.; Ahmad, I.; Al-Sharabi, A.; Ullah, K.; Khan, R.A.; Rasheed, S.; Ullah, I.; Uddin, M.; Ali, M. A novel hybrid deep learning model for metastatic cancer detection. Comput. Intell. Neurosci. 2022, 2022, 8141530.
  23. Ahmad, I.; Wang, X.; Zhu, M.; Wang, C.; Pi, Y.; Khan, J.A.; Khan, S.; Samuel, O.W.; Chen, S.; Li, G. EEG-based epileptic seizure detection via machine/deep learning approaches: A Systematic Review. Comput. Intell. Neurosci. 2022, 2022, 6486570.
  24. Ullah, N.; Khan, J.A.; Alharbi, L.A.; Raza, A.; Khan, W.; Ahmad, I. An Efficient Approach for Crops Pests Recognition and Classification Based on Novel DeepPestNet Deep Learning Model. IEEE Access 2022, 10, 73019–73032.
  25. Tufail, A.B.; Ullah, I.; Khan, W.U.; Asif, M.; Ahmad, I.; Ma, Y.K.; Khan, R.; Ali, M. Diagnosis of diabetic retinopathy through retinal fundus images and 3D convolutional neural networks with limited number of samples. Wirel. Commun. Mob. Comput. 2021, 2021, 6013448.
  26. Khan, H.A.; Jue, W.; Mushtaq, M.; Mushtaq, M.U. Brain tumour classification in MRI image using convolutional neural network. Math. Biosci. Eng. 2020, 17, 6203–6216.
  27. Amin, J.; Sharif, M.; Haldorai, A.; Yasmin, M.; Nayak, R.S. Brain tumour detection and classification using machine learning: A comprehensive survey. Complex Intell. Syst. 2021, 8, 3161–3183.
  28. Srivastava, N.; Hinton, G.E.; Sutskever, I. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. Available online: https://www.jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf (accessed on 10 September 2022).
  29. Dvorák, P.; Menze, B. Structured prediction with convolutional neural networks for multimodal brain tumour segmentation. In Proceedings of the MICCAI Multimodal Brain Tumour Segmentation Challenge (BraTS), Munich, Germany, 5–9 October 2015; pp. 13–24. Available online: http://people.csail.mit.edu/menze/papers/dvorak_15_cnnTumor.pdf (accessed on 10 September 2022).
  30. Irsheidat, S.; Duwairi, R. Brain Tumour Detection Using Artificial Convolutional Neural Networks. In Proceedings of the 2020 11th International Conference on Information and Communication Systems (ICICS), Irbid, Jordan, 7–9 April 2020; pp. 197–203.
  31. Sravya, V.; Malathi, S. Survey on Brain Tumour Detection using Machine Learning and Deep Learning. In Proceedings of the 2021 International Conference on Computer Communication and Informatics (ICCCI), Coimbatore, India, 27–29 January 2021; pp. 1–3.
  32. Dipu, N.M.; Shohan, S.A.; Salam, K.M.A. Deep Learning Based Brain Tumour Detection and Classification. In Proceedings of the 2021 International Conference on Intelligent Technologies (CONIT), Hubli, India, 25–27 June 2021; pp. 1–6.
  33. Gaikwad, S.; Patel, S.; Shetty, A. Brain Tumour Detection: An Application Based on Machine Learning. In Proceedings of the 2021 2nd International Conference for Emerging Technology (INCET), Belagavi, India, 21–23 May 2021; pp. 1–4.
  34. Khairandish, M.O.; Sharma, M.; Jain, V.; Chatterjee, J.M.; Jhanjhi, N.Z. A hybrid CNN-SVM threshold segmentation approach for tumor detection and classification of MRI brain images. IRBM 2021, 43, 290–299.
  35. Swati, Z.N.K.; Zhao, Q.; Kabir, M.; Ali, F.; Ali, Z.; Ahmed, S.; Lu, J. Brain tumors classification for MR images using transfer learning and fine-tuning. Comput. Med. Imaging Graph. 2019, 75, 34–46.
  36. Kumar, S.; Mankame, D.P. Optimization driven deep convolution neural network for brain tumors classification. Biocybern. Biomed. Eng. 2020, 40, 1190–1204.
  37. Deepak, S.; Ameer, P.M. Brain tumors classification using deep CNN features via transfer learning. Comput. Biol. Med. 2019, 111, 103345.
  38. Raja, P.S. Brain tumors classification using a hybrid deep autoencoder with Bayesian fuzzy clustering-based segmentation approach. Biocybern. Biomed. Eng. 2020, 40, 440–453.
  39. Ramamurthy, D.; Mahesh, P.K. Whale Harris Hawks optimization-based deep learning classifier for brain tumors detection using MRI images. J. King Saud Univ. Comput. Inf. Sci. 2020, 32, 1–14.
  40. Bahadure, N.B.; Ray, A.K.; Thethi, H.P. Image analysis for MRI-based brain tumors detection and feature extraction using biologically inspired BWT and SVM. Int. J. Biomed. Imaging 2017, 2017, 9749108.
  41. Waghmare, V.K.; Kolekar, M.H. Brain tumors classification using deep learning. In The Internet of Things for Healthcare Technologies; Springer: Singapore, 2021; Volume 73, pp. 155–175.
  42. Resize Function. Available online: https://www.mathworks.com/products/matlab.html (accessed on 10 September 2022).
  43. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
  44. Xu, B.; Wang, N.; Chen, T.; Li, M. Empirical evaluation of rectified activations in convolutional network. arXiv 2015, arXiv:1505.00853.
  45. Bai, J.; Jiang, H.; Li, S.; Ma, X. NHL pathological image classification based on hierarchical local information and GoogleNet-based representations. BioMed Res. Int. 2019, 2019, 1065652.
  46. Hossain, T.; Shishir, F.S.; Ashraf, M.; Al Nasim, M.A.; Shah, F.M. Brain tumor detection using convolutional neural network. In Proceedings of the 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT), Dhaka, Bangladesh, 3–5 May 2019.
  47. Tiwari, P.; Pant, B.; Elarabawy, M.M.; Abd-Elnaby, M.; Mohd, N.; Dhiman, G.; Sharma, S. CNN based multiclass brain tumor detection using medical imaging. Comput. Intell. Neurosci. 2022, 2022, 1830010.
  48. Kumar, T.S.; Rashmi, K.; Ramadoss, S.; Sandhya, L.K.; Sangeetha, T.J. Brain tumor detection using SVM classifier. In Proceedings of the 2017 Third International Conference on Sensing, Signal Processing and Security (ICSSS), Chennai, India, 4 May 2017.
Figure 1. The block diagram of the framework for BT classification using the proposed GN-AlexNet deep learning model.
Figure 2. MRI images of three categories (ac) (glioma, pituitary, and meningioma).
Figure 3. The basic block diagram of AlexNet model.
Figure 4. GoogleNet’s inception module. (a) Inception module with no convolution layer. (b) A 1 × 1 convolution layer in inception “Reprinted with permission from Ref. [15]. 2022, Samee et al.”.
Figure 5. The proposed GN-AlexNet.
Figure 6. The performance evaluation of the proposed GN-AlexNet model versus the transfer learning (pretrained models).
Figure 7. The confusion matrices of the proposed model and the transfer learning techniques in brain tumor detection.
Figure 8. The ROC of the proposed model with transfer learning techniques in brain tumor detection.
Figure 9. Bar chart of the attained values of the TPR, TNR, and MCC of the proposed model and the other TL models.
Figure 10. The attained values of the FDR, FNR, FPR, and FOR of the proposed model and the other TL models.
Figure 11. The time complexity of the proposed model and the transfer learning techniques in brain tumor detection.
Figure 12. Comparison of the results of the proposed CAD system for BT classification with state-of-the-art DL techniques.
Table 1. A description of the brain data set.
Tumor Class | Images | #Patients | Class Label | MRI Images (views) | Training Data | Testing Data
Glioma | 1427 | 90 | 1 | AX(494), CO(437), SA(496) | 999 | 428
Pituitary | 940 | 63 | 2 | AX(291), CO(319), SA(320) | 652 | 278
Meningioma | 708 | 83 | 3 | AX(209), CO(268), SA(231) | 495 | 213
Total | 3075 | 236 | | | 2146 | 918
Axial = AX, Coronal = CO, and Sagittal = SA.
Table 2. The specifications of the layers in the proposed learning model, GN-AlexNet.
Layer | Filter Size | No. of Filters | Epsilon
Convolution Layer | 1 × 1 | 940 | 0.002
Batch_norm_layer | – | – | 0.001
Soft-Max Layer | | |
Clip_ReLU_layer | | |
Group_Conv_layer | 3 × 3 | 940 |
Clip_ReLU_layer | | | 0.002
Convolution Layer | 1 × 1 | 300 |
Convolution Layer | 1 × 1 | 1260 |
Batch_norm_layer | | | 0.002
Glob_AVG_P_layer | | |
FC layer | | |
SoftMax | | |
Classification Layer | | |
Table 3. Comparative study of the proposed model with recent state-of-art methods.
References | Methods | Acc (%) | Prec (%) | Sens (%) | Spec (%)
This work | Proposed model | 99.1 | 99 | 98.9 | 98
[46] | CNN | 91.6 | 90.8 | 89.9 | 89.5
[47] | BWT+SVM | 95.9 | 94.6 | 93.8 | 93.6
[48] | SVM, KNN | 96.8 | 95.2 | 95 | 94.6
[11] | AlexNet | 94.6 | 93.6 | 93 | 92.4
[47] | GA-CNN | 93.9 | 92.5 | 92 | 91.9
[31] | M-SVM | 96.8 | 96 | 95.8 | 95
[29] | ANN | 94.7 | 93.5 | 93 | 92.50