Article

Brain Tumor Classification and Detection Using Hybrid Deep Tumor Network

by Gehad Abdullah Amran 1,2,*, Mohammed Shakeeb Alsharam 3, Abdullah Omar A. Blajam 4, Ali A. Hasan 5, Mohammad Y. Alfaifi 6, Mohammed H. Amran 7, Abdu Gumaei 8 and Sayed M. Eldin 9,*

1 Department of Management Science and Engineering, Dalian University of Technology, Dalian 116024, China
2 Department of Information Technology, Faculty of Computer Sciences and Information Technology, Al-Razi University, Sana’a 216923, Yemen
3 Department of Computer Science Information Engineering, College of Computer Science and Technology, Tianjin Agriculture University, Tianjin 061102, China
4 College of Computer Science, Zhejiang Normal University, Jinhua 321004, China
5 College of Software Engineering, Northeastern University, Hunnan, Shenyang 110169, China
6 Biology Department, College of Science, King Khalid University, Abha 62529, Saudi Arabia
7 Faculty of Medicine, Ogarev Mordovia State University in Saransk, Bol’shevistskaya Ulitsa 68, 430005 Saransk, Russia
8 Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
9 Center of Research, Faculty of Engineering, Future University in Egypt, New Cairo 11835, Egypt
* Authors to whom correspondence should be addressed.
Electronics 2022, 11(21), 3457; https://doi.org/10.3390/electronics11213457
Submission received: 29 September 2022 / Revised: 15 October 2022 / Accepted: 19 October 2022 / Published: 25 October 2022
(This article belongs to the Special Issue Intelligent Data Sensing, Processing, Mining, and Communication)

Abstract

Brain tumors (BTs) are among the most deadly, destructive, and aggressive diseases, shortening the average life span of patients. Patients with misdiagnosed or insufficiently treated BTs have a lower chance of survival. For tumor analysis, magnetic resonance imaging (MRI) is often utilized. However, due to the vast data produced by MRI, manual segmentation in a reasonable period of time is difficult, which limits the application of standard criteria in clinical practice; efficient, automated segmentation techniques are therefore required. The accurate early detection and segmentation of BTs is a difficult and challenging task in biomedical imaging, and automated segmentation is complicated by the considerable temporal and anatomical variability of brain tumors. Early detection and treatment are therefore essential. Different classical machine learning (ML) algorithms have been utilized to detect brain cancers or tumors, but the main difficulty with these models is that they rely on manually extracted features. This research provides a deep hybrid learning (DeepTumorNetwork) model for binary BT classification that overcomes the above-mentioned problems. The proposed method hybridizes the GoogLeNet architecture with a CNN model by eliminating the last 5 layers of GoogLeNet and adding 14 CNN layers that extract features automatically. On the same Kaggle (Br35H) dataset, the proposed model’s key performance indicators were compared with those of transfer learning (TL) models (ResNet, VGG-16, SqueezeNet, AlexNet, and MobileNet V2) and different ML/DL models. The proposed approach outperformed them on the key performance indicators (accuracy, recall, precision, and F1-score) of BT classification, exhibiting an accuracy of 99.51%, precision of 99%, recall of 98.90%, and F1-score of 98.50%. These results show the superiority of the proposed method over recent sibling methods for BT classification using MRI images.

1. Introduction

A brain tumor (BT) is an abnormal growth of cancerous cells in the brain. Unregulated and aberrant cell division is usually the root cause of a brain tumor. Primary brain tumors are divided into two categories: benign (healthy) and malignant (unhealthy) [1]. Benign tumors can be removed easily, and they rarely reappear.
Furthermore, primary and secondary brain tumors, often known as benign (healthy) and malignant (unhealthy) tumors, are the two major classifications of tumors [2]. Primary brain tumors develop from cells that are already present in the brain, whereas secondary brain tumors develop from cancer cells that have spread from other regions of the body. Benign tumors develop slowly and show identifiable borders; whether they are removed depends on the area of the brain in which they are located. Malignant brain tumors, on the other hand, grow rapidly, are dangerous, and do not have obvious and exact margins because their creeping roots spread into the adjacent tissues.
Clinical procedures performed by doctors for primary diagnosis may include a physical examination, biopsy testing, and digital screening. After the physical examination and the review of the patient’s history, the next step is imaging of the brain. Imaging the brain is essential since the brain is composed of tissues that are extremely sensitive and fragile. There are many medical imaging modalities, but magnetic resonance imaging (MRI) is the most effective technique for detecting abnormalities in brain regions and works well on soft tissues [3]. Computed tomography (CT), perfusion MRI, functional MRI (fMRI), and positron emission tomography (PET) are some of the other imaging technologies available. Effective therapy for a brain tumor, which may include chemotherapy or surgery depending on the patient’s health, can be directed toward the tumor’s location and status if it is identified promptly and accurately [4,5,6,7,8,9].
The manual diagnosis of BTs is laborious, time-consuming, subjective, and lacks precision; to save patients’ lives, a quantitative diagnosis that is both early and accurate is needed. In addition, in order to provide appropriate therapy, medical professionals need an accurate measurement of the tumor’s location [10,11,12,13].
Various studies have been conducted [14,15] on BT classification using ML/DL. ML approaches such as support vector machines (SVMs) [16,17], k-nearest neighbors (KNN) [18], artificial neural networks (ANNs) [19], principal component analysis (PCA), and decision trees [20,21] have been introduced in the literature. These systems rely on manual feature extraction, and the training process requires the retrieval of handcrafted features; as a result, detection and classification accuracy depend on the quality of those key features. Building classifiers with ML also requires a lot of memory and processing time when working with large datasets, and yields lower classification accuracy (%) [22,23,24]. In deep learning, by contrast, CNN layers are frequently employed for feature extraction [23]. Since each neuron in an ANN is coupled to other neurons, these networks can extract information as well; the deeper we go, the more densely linked the layers become, enabling them to perform better in medical imaging. The CNN, for instance, is the most often applied DL model, with a primary application being the classification of brain tumor images [25]. Additionally, it is often advantageous to hybridize two models for better classification and detection, and various researchers who used hybridization with CNN models in biomedical imaging obtained satisfactory results [24,25,26,27]. This inspired us to utilize a hybrid strategy to increase the accuracy and performance of current models in recognizing BTs. To this end, we present the hybrid DeepTumorNet model for identifying and classifying scans into normal and BT classes. In this method, a deep learning mechanism is employed to extract features, and a SoftMax classification layer is utilized to account for heterogeneity. In comparison with the traditional techniques ResNet [26,27], MobileNet V2 [28,29,30,31,32,33,34], AlexNet [27,28,29,30], SqueezeNet [31,33], and VGG-16 [31,34,35] on the Kaggle brain tumor dataset, which is publicly accessible via figshare, the proposed model achieved the highest BT classification accuracy recorded. The goal of the proposed research is to answer the following question:
How precisely and effectively can the DeepTumorNet model detect and categorize distinct forms of BT diseases?
Our key contribution to this study is as follows:
  • We propose a hybrid DL-TL model to identify two different classes of brain scans: brain tumor and non-brain tumor (healthy).
  • The proposed TL-DL detection technique shows superiority over current methods and achieves the highest accuracy on the Kaggle dataset. A large number of tests were conducted with several distinct pre-trained DL models using TL strategies. Furthermore, to reveal the effectiveness of its prediction performance, the proposed method was compared with recent ML/DL and transfer learning models.
The organization of the paper is as follows. The introduction and background are discussed in Section 1 and Section 2, respectively, while data processing and the proposed model are presented in Section 3. Section 4 presents the experimental study, results, and discussion. Section 5 contains the conclusion and future work.

2. Related Works

For BT categorization and detection, various ML/DL models have been employed [36], and DL models play an important role in detection and classification in different areas [37,38,39,40,41,42,43,44,45]. In the literature, several alternative ways of identifying and classifying BTs have been established using magnetic resonance (MR) FLAIR images. Zeineldin et al. [46] developed a DNN technique for automated BT segmentation. Their model comprises two interconnected core components, one for encoding and the other for decoding: a CNN in the encoder part is devoted to extracting meaningful spatial features, and the generated semantic map is then fed into the decoder part to produce a comprehensive probability map. ResNet and DenseNet were investigated in the final stage, and ResNet-50 with TL was used to identify BTs; their experimental results were 95% accurate. In related work, Swati et al. [47] used block-wise transfer learning with 5-fold cross-validation and achieved 94.82% accuracy. To validate their approach, they employed a benchmark dataset based on T1-weighted contrast-enhanced magnetic resonance imaging (CE-MRI). Furthermore, Sarmad Maqsood et al. [48] implemented a fuzzy logic and U-NET CNN model for binary segmentation and classification of BTs and reported that the model performed better than other sibling methods.
The detection accuracy attained with this conceptual framework was 97.5%. In [49], Gurbina et al. extracted wavelet coefficients from images using a feature-based technique. The authors contend that wavelet transforms have a temporal-resolution edge over Fourier transforms, allowing the location coordinates of image frequencies to be determined. Using a support vector machine as the classifier, they obtained 91% accuracy. Rajinikanth et al. [50] presented a CADD system with a CNN model for segmentation and classification; they explained and investigated different CADD systems, and after evaluation and investigation, the SVM model achieved 97% accuracy using 10-fold cross-validation.
Before this model could be trained, it was put through a validation process using the deep learning algorithms Inception-v3 and DenseNet201, which achieved 89% accuracy.
The collection used contains 155 images of malignant BTs and normal, healthy tissue. Furthermore, it was not possible to fine-tune the CNNs because the dataset was small in size, and the testing set was also insufficient to verify the correctness of the proposed model.
A model for the automatic classification of BTs was proposed using VGG-16 and the BraTS dataset [51]. Badjie et al. [52] implemented a DCNN model for binary MRI image segmentation and classification; their AlexNet-based CNN model achieved the best accuracy of up to 90%.
P. Dvorak and colleagues selected the convolutional neural network as the learning approach in [53] due to its ability to cope with feature correlation. They tested the technique on the publicly accessible BRATS2014 dataset, which includes three separate multimodal segmentation tasks. As a consequence, they obtained state-of-the-art results on the BT segmentation dataset, which contained 254 multimodal volumes, and required only thirteen seconds to process each volume.
S. Irsheidat and colleagues created an ANN-based model in [54] that is capable of taking magnetic resonance images and analyzing them using matrix operations and mathematical formulas. To generate reliable predictions concerning the presence of brain cancers, this neural network was trained on the collection’s 253 magnetic resonance images, comprising 155 healthy brains and 98 tumors.
Sravya et al. [55] investigated the detection of BTs and presented various important topics and approaches. Dolphin-SCA, a unique optimized DL approach for the identification and classification of BTs, was described by Kumar et al. [56]. The process is powered by a deep CNN; the researchers used a fuzzy-based model in conjunction with a dolphin echolocation-based sine cosine algorithm (Dolphin-SCA) for segmentation. The obtained characteristics were utilized in a deep neural network built on power statistical features and LDP, with Dolphin-SCA as its basis. The proposed technique obtained a highest accuracy of 81.6%. Maqsood et al. [57] introduced a support vector machine (SVM) with DCNNs for multi-modal BT detection, achieving 96% accuracy. Waghmare et al. [58] identified and classified BTs using a range of CNN architectures. All the mentioned works suffer from limited classification performance on BT images, which the proposed deep tumor network resolves.

3. Methodology

This section presents the proposed deep tumor network, which includes two major steps: data processing (data collection, data augmentation, and class labeling) and the training of the proposed methods together with the process of classifying the Kaggle (Br35H) image dataset into tumor and non-tumor classes, as shown in Figure 1 and Figure 2. In addition, the proposed model’s performance is assessed using the major performance indicators (Acc, Recall, Prec, and F1-Score).

3.1. Brain Tumor Kaggle Dataset

The experiments described in this study were performed utilizing a publicly accessible dataset acquired from Kaggle (Br35H) [59]. This dataset consists of 1500 brain MRI images with tumors and 1500 brain MRI images without tumors. All images are two-dimensional with a height and width of 256 × 256 pixels. All images were skull-stripped and labeled yes if they contained a tumor and no if they did not. Figure 3 shows images with and without tumors, labeled yes and no, respectively. The descriptions of the training and testing datasets are given in Table 1 and Figure 4.

3.2. Preprocessing of the Dataset

The publicly accessible Kaggle dataset contains a total of 3000 images: 1500 brain tumor images and 1500 normal images with no brain tumor. All images had to be converted to 224 × 224 pixels, so the dataset was pre-processed to make it appropriate for the proposed methods. After the MRI scan images were first converted from .mat to .jpg format, they were resized using the resize function available in Python, according to the image input sizes utilized by the proposed deep learning model as well as by other pretrained models. The MRI images were thus scaled down to 224 × 224 pixels, while the DarkNet19 images were scaled down to 256 × 256 pixels. In addition, the images in the dataset were divided into two groups, 90% for training and 10% for testing, also using 10-fold cross-validation. Additionally, the image data were labeled 0 for normal and 1 for brain tumor and provided as input to the proposed model. A minimal sketch of this preprocessing follows.
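The sketch below illustrates the resizing, grayscale normalization, labeling, and 90/10 split described above. The folder layout (yes/ for tumor, no/ for normal), file pattern, and helper name are illustrative assumptions, not taken from the paper.

```python
# Minimal preprocessing sketch: resize MRI slices to 224x224, normalize,
# label (1 = tumor, 0 = normal), and split 90/10 as in Section 3.2.
# The "dataset/yes" / "dataset/no" layout is assumed for illustration.
import glob
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split

def load_images(pattern, label, size=(224, 224)):
    images, labels = [], []
    for path in glob.glob(pattern):
        img = Image.open(path).convert("L")   # grayscale slice
        img = img.resize(size)                # resize as described above
        images.append(np.asarray(img, dtype=np.float32) / 255.0)
        labels.append(label)
    return images, labels

tumor_x, tumor_y = load_images("dataset/yes/*.jpg", 1)
normal_x, normal_y = load_images("dataset/no/*.jpg", 0)

X = np.expand_dims(np.array(tumor_x + normal_x), -1)  # shape (N, 224, 224, 1)
y = np.array(tumor_y + normal_y)

# 90% training / 10% testing, as used in this study.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.10, stratify=y, random_state=0)
```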

3.3. Data Augmentation

The Kaggle dataset (Br35H) includes 3000 images, which were insufficient, so data augmentation was needed to enlarge the dataset by scaling and rotating the images and adding noise. The images were vertically and horizontally zoomed at certain angles and their brightness was increased, which improved the training and classification performance of the proposed model. Additionally, each image of the Kaggle dataset was augmented 17 times relative to the original dataset to avoid the overfitting issue [11]. Some of the data augmentation techniques used in this research are as follows (a hedged code sketch follows the list):
  • Position augmentation
    In this process, the positions of the brain MRI image pixels are changed.
  • Scaling
    In the scaling process, the brain images are resized.
  • Cropping
    In the cropping process, a small portion of the brain MRI image is selected; in this study we selected the center of the brain images.
  • Brightness
    In this step, the brightness of the brain images is changed from the original to a lighter one.
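As a concrete illustration of the augmentations above, the sketch below uses Keras’ ImageDataGenerator; the specific parameter values are illustrative assumptions, not the authors’ settings (center cropping would be applied as a separate manual step).

```python
# Hedged augmentation sketch covering position shifts, scaling/zooming,
# rotation, flips, and brightness changes; values are illustrative only.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    width_shift_range=0.1,        # position augmentation
    height_shift_range=0.1,
    zoom_range=0.2,               # scaling (zoom in/out)
    rotation_range=15,            # rotation at certain angles
    brightness_range=(1.0, 1.3),  # lighten the images
    horizontal_flip=True,
    vertical_flip=True,
    fill_mode="nearest")

# Yields augmented batches during training, e.g.:
# model.fit(augmenter.flow(X_train, y_train, batch_size=100), epochs=100)
```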

3.4. Row Major Order

In row-major order, the pixels of multi-dimensional image arrays are stored row by row in a single contiguous block, which eases computational processing, because RGB and grayscale images are complex multi-dimensional arrays that would otherwise need more computing resources. A small demonstration follows.
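NumPy’s default array layout is row-major (“C” order), so a short sketch makes the storage order concrete; the toy image shape is an illustrative assumption.

```python
# Row-major layout demo: flattening an image lays pixels out row by row.
import numpy as np

img = np.arange(2 * 3 * 3, dtype=np.uint8).reshape(2, 3, 3)  # toy 2x3 RGB image
flat = img.flatten(order="C")        # "C" = row-major (NumPy's default)

print(img.flags["C_CONTIGUOUS"])     # True: rows are contiguous in memory
print(flat[:9])                      # first image row (3 pixels x 3 channels)
```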

3.5. Proposed Model

The hybrid DeepTumorNet model consists of a CNN as a fundamental model hybridized with GoogLeNet. Training a CNN model from scratch may take days and is a difficult task [30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48], so it is preferable to first train the proposed model with the TL model before hybridizing the CNN model. In this study, we implemented the GoogLeNet [26] model as the foundational model because it won the ILSVRC (2014) ImageNet competition. The basic GoogLeNet model consists of 22 layers in total, including convolution layers, average pooling layers, normalization layers, global max-pooling layers, inception modules, and fully connected layers (FCLs). A new input layer with dimensions 224 × 224 × 1 was implemented in GoogLeNet. Within the pre-trained GoogLeNet, the ReLU activation function (RAF) was used; ReLU ignores any negative values and substitutes zero for them. In our model, ReLU was upgraded to Leaky ReLU, in which negative values are replaced with small non-zero ones. Furthermore, the last 5 layers of GoogLeNet were replaced by 14 additional CNN layers, and in these CNN layers ReLU was likewise replaced by Leaky ReLU. These changes were accomplished without altering the primary structure of the proposed model. After adding these layers, the total number of layers was 27.
For the first convolution layer, the images were shuffled and the filter size was 8 × 8. The second stage had two deep convolution layers consisting of 1 × 1 convolution blocks, achieving dimensionality reduction and a decrease in the number of parameters. GoogLeNet contains different inception modules with various convolution kernel (CK) sizes, from 1 × 1 to 5 × 5, capturing different features. At the beginning of the process, the important features are extracted; the 1 × 1 CKs reduce processing time while providing sufficient information, as described in Table 2. The convolution layers provide increasingly robust and precise feature information: the first layers extract minute features, while the later layers extract high-level features. Furthermore, the addition of global average pooling layers improved the validation accuracy of the proposed model, and the Leaky ReLU activations improved the model’s expressiveness and solved the dying-ReLU issue, resulting in improved classification performance, as shown in Figure 4. Owing to these layers, the proposed model was able to extract the most important, deep, and discriminative features, which improved classification performance compared with other recent state-of-the-art methods and ML/DL models. A hedged architectural sketch follows.
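The sketch below illustrates the hybrid idea in Keras: drop the classifier of a pretrained Inception-style backbone and append custom convolution blocks with Leaky ReLU, global average pooling, and a SoftMax head. Keras ships no stock GoogLeNet, so InceptionV3 stands in here, and the filter counts (taken loosely from Table 2) are illustrative; this is not the authors’ exact 14-layer stack.

```python
# Illustrative hybrid-head sketch (a stand-in, not the authors' exact model).
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

# Pretrained ImageNet weights expect 3 channels; grayscale slices can be
# replicated across channels before training.
backbone = InceptionV3(include_top=False, weights="imagenet",
                       input_shape=(224, 224, 3))

x = backbone.output
x = layers.Conv2D(940, 1, padding="same")(x)   # 1x1 block (cf. Table 2)
x = layers.BatchNormalization()(x)
x = layers.LeakyReLU(0.01)(x)                  # Leaky ReLU replaces ReLU
x = layers.Conv2D(300, 1, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.LeakyReLU(0.01)(x)
x = layers.GlobalAveragePooling2D()(x)         # helps validation accuracy
outputs = layers.Dense(2, activation="softmax")(x)  # tumor vs. no tumor

model = models.Model(backbone.input, outputs)
# With two classes, cross-entropy over the SoftMax output matches the
# binary cross-entropy of Equation (2).
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```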

3.6. Input Image Data

The input to the proposed DeepTumorNet model starts from the image input layer. In this study, images of size 224 × 224 × 1 were provided in grayscale, the dimensions denoting width, height, and number of channels (one for grayscale, compared with the three colors, red, green, and blue, of a color image). In the initial training process, these images were passed through the input layers.
  • Convolutional layer
In this layer, the two major inputs are the image matrix and the filter. The mathematical operation involves convolving the filter with the image to generate the feature map.
  • Activation layer
    In this layer, the rectifier linear units (ReLUs) were used, which speeds up the training process and gives nonlinearity to the network model. The mathematical expression of the activation function is shown in Equation (1).
$$ \mathrm{ReLU}(y) = \begin{cases} y, & y > 0 \\ 0, & y \le 0 \end{cases} \tag{1} $$
For positive inputs y, the ReLU activation function returns y as the output. For negative inputs, the standard ReLU returns 0, whereas the Leaky ReLU used in the proposed model instead returns a much smaller value equal to 0.01y; as a result, no neuron is rendered permanently inactive, and we no longer encounter dead neurons.
  • Batch normalization layers
The outputs created by the convolution layers are normalized by the batch normalization layer. Normalization shortens the training duration of the proposed model, making the learning process both more efficient and faster.
  • Pooling layer
The convolutional layer’s primary limitation is that it captures only location-dependent features; classification therefore becomes inaccurate if there is even a small shift in the position of a feature within the image. By making the representation more compact through pooling, the network bypasses this constraint, so the representation becomes invariant to small changes and details. Max pooling and average pooling were applied so that the features could be linked to one another.
  • Fully connected layer
In this layer, the features generated by the convolution layers are fed into the FC layers. In an FC layer, every node is connected to every node in the adjacent layer, establishing the relation between an input image and its associated class. This layer feeds the SoftMax activation.
  • Loss function
During training, this function must be minimized. After an image has been processed through all of the preceding layers, the output is calculated and compared with the expected outcome using the loss function to compute the error rate. This procedure is repeated until the loss function is minimized. We used binary cross-entropy (BCE) as our loss function; its mathematical expression is shown in Equation (2) (see also the sketch following this list).
$$ BCE\ Loss = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log(p_i) + (1 - y_i)\log(1 - p_i) \right] \tag{2} $$
In binary classification, the actual value of y may only take one of two forms, 0 or 1. Therefore, to accurately determine the loss between the expected and actual results, the actual value (0 or 1) is compared with the probability that the input belongs to that category, where p_i is the predicted probability that the category is 1 and 1 − p_i is the probability that the category is 0.
  • SoftMax layer
The FC layer’s outcomes are normalized by this activation function: SoftMax performs the probabilistic computation for the network and generates a positive value for each class, with the values summing to one.
  • Classification Layer
The classification layer is the model’s final layer. This layer is utilized to generate the output by merging each input; as a consequence of the SoftMax activation function, a posterior distribution is obtained [34].
  • Grid search Hyperparameter optimization
Grid search is a hyperparameter optimization approach that methodically builds and evaluates a model for each combination of algorithm parameters specified in a grid. In this problem, we tuned the hyperparameters using grid search to find the combination giving the best classification performance. The optimal hyperparameters found were epoch count = 100, epsilon = 0.002, filter size = 1 × 1, batch size = 100, and learning rate = 0.009. The grid search optimization also used 10-fold cross-validation, in which both training and testing are carried out exactly once within each fold. Ten-fold cross-validation is a good technique for avoiding overfitting: k-fold validation reduces variance by averaging over k different partitions, so the performance estimate is less sensitive to how the data are partitioned. In the 10-fold cross-validation process, the dataset is split into 10 equal parts using a random number generator; nine parts are used for training while the remaining tenth is set aside for evaluation, and this process is carried out ten times, setting aside a different tenth for evaluation each time. A hedged sketch of the activation, loss, and grid search appears after this list.
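To make the pieces above concrete, the sketch below implements the activation of Equation (1) with its leaky variant, the binary cross-entropy of Equation (2), and a grid search with 10-fold cross-validation. The scikit-learn estimator and parameter grid are illustrative stand-ins, not the study’s exact configuration.

```python
# Hedged sketch: Leaky ReLU, BCE loss, and 10-fold grid search.
import numpy as np

def leaky_relu(y, alpha=0.01):
    # y for positive inputs, 0.01*y for negative ones (no "dead" neurons)
    return np.where(y > 0, y, alpha * y)

def bce_loss(y_true, p_pred, eps=1e-12):
    # Equation (2): mean negative log-likelihood over N samples
    p = np.clip(p_pred, eps, 1.0 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

# Grid search with cv=10 (a simple estimator shown for brevity; the study
# tuned epochs, batch size, learning rate, etc. on its deep model).
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

grid = GridSearchCV(LogisticRegression(max_iter=1000),
                    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
                    cv=10, scoring="accuracy")
# grid.fit(X_train.reshape(len(X_train), -1), y_train)
# print(grid.best_params_, grid.best_score_)
```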

3.7. Transfer Learning Model

Transfer learning employs a model that has already been trained to learn from new, diverse data, using the characteristics learned while solving one problem as a springboard for solving other problems. In this work, we employed five CNN architectures pre-trained to predict 1000 classes: AlexNet, ResNet, VGG-16, MobileNet-V2, and SqueezeNet. These architectures were trained on 1.2 M images. Taking the full image as input, these networks generate the labels of each object in the image probabilistically. A minimal sketch of this reuse appears below, followed by descriptions of the individual architectures.
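The sketch below shows the standard transfer-learning recipe, assuming the Keras applications API: reuse ImageNet weights, freeze the convolutional base, and train a new two-class head. MobileNetV2 is shown; ResNet50, VGG16, and similar backbones swap in identically.

```python
# Minimal transfer-learning sketch (frozen pretrained base + new head).
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

base = MobileNetV2(include_top=False, weights="imagenet",
                   input_shape=(224, 224, 3))
base.trainable = False                       # keep the pretrained features

head = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),   # tumor vs. no tumor
])
head.compile(optimizer="adam",
             loss="sparse_categorical_crossentropy",
             metrics=["accuracy"])
# head.fit(X_train_rgb, y_train, validation_split=0.1, epochs=10)
```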
  • ResNet
This model is Microsoft Research’s 50-layer Residual Network built in [60]. ResNet employs shortcut connections to speed up training and improve performance, which can decrease errors as complexity rises; the residual connections are linked to feature deduction, and ResNet also addresses the issue of accuracy decreasing with depth. Figure 5 depicts the ResNet model design.
  • Mobile Net-V2
As illustrated in [61], the MobileNet-V2 model has two types of blocks. The first is a series of linear bottleneck operations, whereas the second adds a skip connection. Batch normalization, convolution, and a modified ReLU are all included in both blocks. MobileNet-V2 has a total of 16 blocks.
  • VGG-16
Karen Simonyan and Andrew Zisserman of Oxford University’s Visual Geometry Group proposed VGG-16 in the 2014 article “Very Deep Convolutional Networks for Large-Scale Image Recognition”. In the 2014 ILSVRC competition [61], this model took first place in the localization task and second place in the classification task, as shown in Figure 6 [62].
  • SqueezNet
SqueezeNet is an 18-layer deep convolutional neural network. A pretrained variant of the network, trained on more than a million images from the ImageNet database, may be loaded; as a consequence, the network has learned detailed visual features for a diverse set of images. This method returns a SqueezeNet v1.1 network with similar accuracy to SqueezeNet v1.0 but fewer floating-point computations per prediction [63], as shown in Figure 7.
  • Alex Net
In AlexNet, the network is divided into 11 different layers. The significant number of layers makes feature extraction easier, although the extensive number of parameters has a negative influence on overall performance. The first layer of AlexNet is a convolution layer; the next convolution layer follows the max-pooling and normalization layers. The classification procedure closes with the application of the SoftMax layer [64], as shown in Figure 8.

4. Result and Discussion

4.1. Experimental Setup

Table 3 presents the experimental setup.

4.2. Evaluation Matrix

To evaluate the proposed model’s performance, the key metrics (Acc, Prec, Recall, Spec, and F1-Score) are used. These are computed from the true positives (TP), false negatives (FN), true negatives (TN), and false positives (FP), as shown in Equations (3)–(7); a small computational sketch follows the equations.
The key performance indicators are:
$$ Acc\,(\%) = \frac{TP + TN}{TP + TN + FP + FN} \tag{3} $$
$$ Prec\,(\%) = \frac{TP}{TP + FP} \tag{4} $$
$$ Recall\,(\%) = \frac{TP}{TP + FN} \tag{5} $$
$$ Spec\,(\%) = \frac{TN}{TN + FP} \tag{6} $$
$$ F1\text{-}Score\,(\%) = \frac{2 \times (Prec \times Recall)}{Prec + Recall} \tag{7} $$
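For concreteness, the sketch below computes Equations (3)–(7) from confusion-matrix counts; the counts used in the example are placeholders, not results from this paper.

```python
# Equations (3)-(7) from binary confusion-matrix counts.
def metrics(tp, tn, fp, fn):
    acc = (tp + tn) / (tp + tn + fp + fn)     # Equation (3)
    prec = tp / (tp + fp)                     # Equation (4)
    recall = tp / (tp + fn)                   # Equation (5)
    spec = tn / (tn + fp)                     # Equation (6)
    f1 = 2 * prec * recall / (prec + recall)  # Equation (7)
    return {"Acc": acc, "Prec": prec, "Recall": recall,
            "Spec": spec, "F1": f1}

# Placeholder counts for illustration only:
print({k: f"{100 * v:.2f}%" for k, v in metrics(247, 248, 2, 3).items()})
```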
This section compares the performance metrics of the proposed system with transfer learning (TL) models such as VGG-16, SqueezeNet, MobileNet V2, ResNet, and AlexNet for brain tumor detection and classification, using accuracy (%), precision (%), recall (%), and F1-score (%). Accuracy is one of the core aspects that exhibits per-class efficiency in classification performance.
Additionally, precision indicates the ratio of correct positive predictions to all positive predictions, and specificity presents the percentage of the negative class correctly identified. In the evaluation, the key performance indicators of the proposed model were compared with those of the other TL methods; the proposed model shows the best classification performance in terms of accuracy (99.20%), precision (99.10%), specificity (98.2%), recall (98.60%), and F1-score (98%). Figure 9 shows that the SqueezeNet model has the lowest performance metrics.

4.3. Confusion Matrix

A confusion matrix is a performance assessment indicator that measures the detection of each class. In this investigation, the proposed deep tumor network’s confusion matrix shows good classification performance for binary tumor detection, with each type of brain scan properly classified. Figure 10 shows that the TL models have lower performance.

4.4. ROC Analysis

The receiver operating characteristic (ROC) curve is critical for evaluating brain tumor detection; it depicts the trade-off between the TPR and FPR for each class’s detection performance. Figure 11 demonstrates that the proposed method outperforms the other TL techniques on the basis of the ROC curve.

4.5. TNR, TPR, and MCC Analysis

This subsection presents the analysis of the TNR, TPR, and MCC of the proposed model against the best performers (AlexNet and MobileNet) on the Kaggle dataset. Figure 12 shows that the proposed model achieved excellent TPR, TNR, and MCC values compared with the other TL models.

4.6. Time Complexity (%)

Detection time is an important metric of a model’s effectiveness, reflecting its internal ability to extract features and perform classification quickly. The proposed model achieved a detection time as low as 3 ms, as shown in Figure 13. Furthermore, the proposed model’s time complexity is expressed using big O notation; O(n²), where n represents the initial number of samples, is the measure used most frequently here. “Big O” refers precisely to the worst-case scenario, and it may represent either the amount of time needed for an algorithm’s execution or the amount of space it takes up.
The proposed model’s efficiency was tested by comparing it with other DL models using the same dataset. When compared with the other models, the proposed method shows high classification performance. A hedged sketch of measuring per-image detection time follows.
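The sketch below shows one way to measure mean per-image detection time with a monotonic clock; the model and test-set names are assumptions carried over from the earlier sketches, not the authors’ benchmarking harness.

```python
# Hedged timing sketch: mean per-image inference time in milliseconds.
import time

def mean_detection_time_ms(model, images, repeats=10):
    start = time.perf_counter()
    for _ in range(repeats):
        model.predict(images, verbose=0)      # batched inference
    elapsed = time.perf_counter() - start
    return 1000.0 * elapsed / (repeats * len(images))

# print(f"{mean_detection_time_ms(head, X_test):.2f} ms per image")
```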

4.7. Comparative Results with Existing ML/DL Model

We compared the proposed deep tumor network with other strong benchmark algorithms such as LSTM, GRU, and CNN. Figure 14 shows the accuracy metrics used to check the performance of the models. Although the deep tumor network delivers impressive outcomes, all of these methods were evaluated on the same parameters. The proposed model does have a limitation: it needs substantial computing resources (a capable GPU) for the training process, which entails a high training time.
When compared with other baseline methods from the existing literature, the performance of our proposed model in binary tumor classification is remarkable, as shown in Table 4.

5. Conclusions

This study proposed a hybrid of the GoogLeNet and CNN models, termed the deep tumor network, for BT detection. The GoogLeNet model was adopted as the foundation of the proposed model: the final five layers of GoogLeNet were deleted and replaced by 14 CNN layers, each one deeper than the prior one. Furthermore, the ReLU activation functions were changed to Leaky ReLU, although the basic CNN architecture remained unchanged. The total number of layers increased from 22 to 33 once the changes were implemented. The proposed hybrid model attained the highest classification accuracy of 99.10%. In addition, to classify the BT types, we used the Kaggle brain tumor dataset to train five deep CNN models with the TL technique, and the results of these models were then compared with those of the proposed model. The outcomes of the investigations indicated that the proposed model was capable of distinguishing brain tumors with greater accuracy: it computed more descriptive, discriminative, and precise features for brain tumor detection, resulting in a high degree of accuracy compared with existing state-of-the-art techniques. The results also show clearly that the CNN models using transfer learning offered strong performance, but in contrast to the other pre-trained models, the hybrid framework achieved the best level of accuracy.
In future work, we will conduct experiments using a dataset with a limited number of brain MRI scans containing malignant lesions and a significant number of normal scans, with the proposed model trying to extract information that is more comprehensive, discriminative, and precise. Accordingly, before categorizing the Kaggle brain dataset into two groups (brain tumor and non-brain tumor), an effective segmentation technique for brain MRI data should be applied. Furthermore, we wish to assess the efficacy of the presented hybrid approach on other types of biomedical imaging data, such as COVID-19, lung disease, and asthma diagnosis.

Author Contributions

Conceptualization, G.A.A. and M.H.A.; data curation, G.A.A., M.S.A., A.O.A.B. and M.Y.A.; formal analysis, G.A.A., A.O.A.B., M.Y.A. and A.G.; funding acquisition, S.M.E.; investigation, G.A.A., A.O.A.B., A.A.H., M.Y.A., A.G. and S.M.E.; methodology, G.A.A.; project administration, G.A.A. and S.M.E.; resources, G.A.A., M.S.A., A.O.A.B. and M.H.A.; software, G.A.A., M.S.A., A.O.A.B. and A.A.H.; supervision, S.M.E.; validation, A.A.H. and M.Y.A.; visualization, G.A.A. and A.G.; writing—original draft, G.A.A.; writing—review and editing, G.A.A., M.S.A., A.A.H., M.H.A., A.G. and S.M.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data used to support the findings of this study are available at https://www.kaggle.com/ahmedhamada0/brain-tumour-detection (accessed on 1 October 2022).

Acknowledgments

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through large Groups (Project under grant number R.G.P.2/161/43).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bush, N.A.O.; Susan, M.C.; Mitchel, S.B. Current and future strategies for treatment of glioma. Neurosurg. Rev. 2017, 40, 1–14.
  2. Narkhede Sachin, G.; Khairnar, V.; Kadu, S. Brain tumor detection based on mathematical analysis and symmetry information. Int. J. Eng. Res. Appl. 2014, 4, 231–235.
  3. Praveen, G.B.; Agrawal, A. Hybrid approach for brain tumor detection and classification in magnetic resonance images. In 2015 Communication, Control and Intelligent Systems (CCIS); IEEE: Piscataway, NJ, USA, 2015.
  4. He, W.; Chen, F.; Dalm, B.; Kirby, P.A.; Greenlee, J.D. Metastatic involvement of the pituitary gland: A systematic review with pooled individual patient data analysis. Pituitary 2015, 18, 159–168.
  5. DeAngelis, L.M. Brain tumors. N. Engl. J. Med. 2001, 344, 114–123.
  6. Louis, D.N.; Perry, A.; Reifenberger, G.; Von Deimling, A.; Figarella-Branger, D.; Cavenee, W.K.; Ohgaki, H.; Wiestler, O.D.; Kleihues, P.; Ellison, D.W. The 2016 World Health Organization classification of tumors of the central nervous system: A summary. Acta Neuropathol. 2016, 132, 803–820.
  7. Roy, S.; Nag, S.; Maitra, I.K.; Bandyopadhyay, S.K. A review on automated brain tumor detection and segmentation from MRI of brain. arXiv 2013, arXiv:1312.6150.
  8. Tutsoy, O.; Barkana, D.E.; Balikci, K. A novel exploration-exploitation-based adaptive law for intelligent model-free control approaches. IEEE Trans. Cybern. 2021, 1–9.
  9. Rehman, A.; Naz, S.; Razzak, M.I.; Akram, F.; Imran, M. A deep learning-based framework for automatic brain tumors classification using transfer learning. Circuits Syst. Signal Process. 2020, 39, 757–775.
  10. Wang, Y.; Zu, C.; Hu, G.; Luo, Y.; Ma, Z.; He, K.; Wu, X.; Zhou, J. Automatic tumor segmentation with deep convolutional neural networks for radiotherapy applications. Neural Process. Lett. 2018, 48, 1323–1334.
  11. Visin, F.; Ciccone, M.; Romero, A.; Kastner, K.; Cho, K.; Bengio, Y.; Matteucci, M.; Courville, A. Reseg: A recurrent neural network-based model for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Las Vegas, NV, USA, 26 June–1 July 2016.
  12. Mubashar, M.; Ali, H.; Grönlund, C.; Azmat, S. R2U++: A multiscale recurrent residual U-Net with dense skip connections for medical image segmentation. Neural Comput. Appl. 2022, 34, 17732–17739.
  13. Raza, A.; Ayub, H.; Khan, J.A.; Ahmad, I.; SSalama, A.; Daradkeh, Y.I.; Javeed, D.; Ur Rehman, A.; Hamam, H. A Hybrid Deep Learning-Based Approach for Brain Tumor Classification. Electronics 2022, 11, 1146.
  14. Ding, Y.; Zhang, C.; Lan, T.; Qin, Z.; Zhang, X.; Wang, W. Classification of Alzheimer’s disease based on the combination of morphometric feature and texture feature. In Proceedings of the 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Washington, DC, USA, 9–12 November 2015; pp. 409–412.
  15. Ijaz, A.; Ullah, I.; Khan, W.U.; Ur Rehman, A.; Adrees, M.S.; Saleem, M.Q.; Cheikhrouhou, O.; Hamam, H.; Shafiq, M. Efficient algorithms for E-healthcare to solve multiobject fuse detection problem. J. Healthc. Eng. 2021, 2021, 9500304.
  16. Ahmad, I.; Liu, Y.; Javeed, D.; Ahmad, S. A decision-making technique for solving order allocation problem using a genetic algorithm. IOP Conf. Ser. Mater. Sci. Eng. 2020, 853, 012054.
  17. Binaghi, E.; Omodei, M.; Pedoia, V.; Balbi, S.; Lattanzi, D.; Monti, E. Automatic segmentation of MR brain tumors images using support vector machine in combination with graph cut. In Proceedings of the 6th International Joint Conference on Computational Intelligence (IJCCI), Rome, Italy, 22–24 October 2014; pp. 152–157.
  18. Bahadure, N.B.; Ray, A.K.; Thethi, H.P. The comparative approach of MRI-based brain tumors segmentation and classification using genetic algorithm. J. Digit. Imaging 2018, 31, 477–489.
  19. Zikic, D.; Glocker, B.; Konukoglu, E.; Criminisi, A.; Demiralp, C.; Shotton, J.; Thomas, O.M.; Das, T.; Jena, R.; Price, S.J. Forests for tissue-specific segmentation of high-grade gliomas in multi-channel MR. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Berlin/Heidelberg, Germany, 1 October 2012; pp. 369–376.
  20. Kaya, I.E.; Pehlivanlı, A.Ç.; Sekizkardeş, E.G.; Ibrikci, T. PCA based clustering for brain tumors segmentation of T1w MRI images. Comput. Methods Programs Biomed. 2017, 140, 19–28.
  21. Nikam, R.D.; Lee, J.; Choi, W.; Banerjee, W.; Kwak, M.; Yadav, M.; Hwang, H. Ionic sieving through one-atom-thick 2D material enables analog nonvolatile memory for neuromorphic computing. Adv. Electron. Mater. 2021, 17, 2100142.
  22. Zhang, J.P.; Li, Z.W.; Yang, J. A parallel SVM training algorithm on large-scale classification problems. In Proceedings of the 2005 International Conference on Machine Learning and Cybernetics, Guangzhou, China, 18–21 August 2005; pp. 1637–1641.
  23. Ye, F.; Yang, J. A deep neural network model for speaker identification. Appl. Sci. 2021, 11, 3603.
  24. Ijaz, A.; Liu, Y.; Javeed, D.; Shamshad, N.; Sarwr, D.; Ahmad, S. A review of artificial intelligence techniques for selection & evaluation. IOP Conf. Ser. Mater. Sci. Eng. 2020, 853, 012055.
  25. Akbari, H.; Khalighinejad, B.; Herrero, J.L.; Mehta, A.D.; Mesgarani, N. Towards reconstructing intelligible speech from the human auditory cortex. Sci. Rep. 2019, 9, 874.
  26. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
  27. Harish, P.; Baskar, S. MRI based detection and classification of brain tumor using enhanced faster R-CNN and Alex Net model. Mater. Today Proc. 2020, 7, 770–778.
  28. Saleem, S.; Amin, J.; Sharif, M.; Anjum, M.A.; Iqbal, M.; Wang, S.H. A deep network designed for segmentation and classification of leukemia using fusion of the transfer learning models. Complex Intell. Syst. 2021, 7, 3105–3120.
  29. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861.
  30. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  31. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 6848–6856.
  32. Abiwinanda, N.; Hanif, M.; Hesaputra, S.T.; Handayani, A.; Mengko, T.R. Brain tumors classification using convolutional neural network. In World Congress on Medical Physics and Biomedical Engineering; Springer: Singapore, 2018; pp. 183–189.
  33. Ramzan, F.; Khan, M.U.G.; Rehmat, A.; Iqbal, S.; Saba, T.; Rehman, A.; Mehmood, Z. A deep learning approach for automated diagnosis and multi-class classification of Alzheimer’s disease stages using resting-state fMRI and residual neural networks. J. Med. Syst. 2020, 44, 37.
  34. Anaraki, A.K.; Ayati, M.; Kazemi, F. Magnetic resonance imaging-based brain tumors grades classification and grading via convolutional neural networks and genetic algorithms. Biocybern. Biomed. Eng. 2019, 39, 63–74.
  35. Noreen, N.; Palaniappan, S.; Qayyum, A.; Ahmad, I.; Imran, M.; Shoaib, M. A Deep Learning Model Based on Concatenation Approach for the Diagnosis of Brain Tumour. IEEE Access 2020, 8, 55135–55144.
  36. Khan, H.A.; Jue, W.; Mushtaq, M.; Mushtaq, M.U. Brain tumour classification in MRI image using convolutional neural network. Math. Biosci. Eng. 2020, 17, 6203–6216.
  37. Javeed, D.; Gao, T.; Khan, M.T.; Ahmad, I. A hybrid deep learning-driven SDN enabled mechanism for secure communication in Internet of Things (IoT). Sensors 2021, 21, 4884.
  38. Ahmad, I.; Wang, X.; Zhu, M.; Wang, C.; Pi, Y.; Khan, J.A.; Khan, S.; Samuel, O.W.; Chen, S.; Li, G. EEG-based epileptic seizure detection via machine/deep learning approaches: A Systematic Review. Comput. Intell. Neurosci. 2022, 2022, 6486570.
  39. Alanazi, A.K.; Alizadeh, S.M.; Nurgalieva, K.S.; Nesic, S.; Grimaldo Guerrero, J.W.; Abo-Dief, H.M.; Eftekhari-Zadeh, E.; Nazemi, E.; Narozhnyy, I.M. Application of Neural Network and Time-Domain Feature Extraction Techniques for Determining Volumetric Percentages and the Type of Two Phase Flow Regimes Independent of Scale Layer Thickness. Appl. Sci. 2022, 12, 1336.
  40. Ahmad, S.; Ullah, T.; Ahmad, I.; Al-Sharabi, A.; Ullah, K.; Khan, R.A.; Rasheed, S.; Ullah, I.; Uddin, M.; Ali, M. A novel hybrid deep learning model for metastatic cancer detection. Comput. Intell. Neurosci. 2022, 2022, 8141530.
  41. Ullah, N.; Khan, J.A.; Alharbi, L.A.; Raza, A.; Khan, W.; Ahmad, I. An Efficient Approach for Crops Pests Recognition and Classification Based on Novel DeepPestNet Deep Learning Model. IEEE Access 2022, 10, 73019–73032.
  42. Wang, X.; Ahmad, I.; Javeed, D.; Zaidi, S.A.; Alotaibi, F.M.; Ghoneim, M.E.; Daradkeh, Y.I.; Asghar, J.; Eldin, E.T. Intelligent Hybrid Deep Learning Model for Breast Cancer Detection. Electronics 2022, 11, 2767.
  43. Wang, Y.; Taylan, O.; Alkabaa, A.S.; Ahmad, I.; Tag-Eldin, E.; Nazemi, E.; Balubaid, M.; Alqabbaa, H.S. An Optimization on the Neuronal Networks Based on the ADEX Biological Model in Terms of LUT-State Behaviors: Digital Design and Realization on FPGA Platforms. Biology 2022, 11, 1125.
  44. Khan, M.A.; Ahmad, I.; Nordin, A.N.; Ahmed, A.E.S.; Mewada, H.; Daradkeh, Y.I.; Rasheed, S.; Eldin, E.T.; Shafiq, M. Smart Android Based Home Automation System Using Internet of Things (IoT). Sustainability 2022, 14, 10717.
  45. Ullah, N.; Khan, J.A.; Almakdi, S.; Khan, M.S.; Alshehri, M.; Alboaneen, D.; Raza, A. A Novel CovidDetNet Deep Learning Model for Effective COVID-19 Infection Detection Using Chest Radiograph Images. Appl. Sci. 2022, 12, 6269.
  46. Zeineldin, R.A.; Karar, M.E.; Coburger, J.; Wirtz, C.R.; Burgert, O. DeepSeg: Deep neural network framework for automatic brain tumor segmentation using magnetic resonance FLAIR images. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 909–920.
  47. Swati, Z.N.K.; Zhao, Q.; Kabir, M.; Ali, F.; Ali, Z.; Ahmed, S.; Lu, J. Brain tumor classification for MR images using transfer learning and fine-tuning. Comput. Med. Imaging Graph. 2019, 75, 34–46.
  48. Maqsood, S.; Damasevicius, R.; Shah, F.M. An efficient approach for the detection of brain tumor using fuzzy logic and U-NET CNN classification. In International Conference on Computational Science and Its Applications; Springer: Cham, Switzerland, 2021.
  49. Gurbina, M.; Lascu, M.; Lascu, D. Tumor detection and classification of MRI brain image using different wavelet transforms and support vector machines. In Proceedings of the 2019 42nd International Conference on Telecommunications and Signal Processing (TSP), Budapest, Hungary, 1–3 July 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 505–508.
  50. Rajinikanth, V.; Kadry, S.; Nam, Y. Convolutional-neural-network assisted segmentation and SVM classification of brain tumor in clinical MRI slices. Inf. Technol. Control 2021, 50, 342–356.
  51. Amin, J.; Sharif, M.; Haldorai, A.; Yasmin, M.; Nayak, R.S. Brain tumour detection and classification using machine learning: A comprehensive survey. Complex Intell. Syst. 2021, 8, 3161–3183.
  52. Badjie, B.; Ülker, E.D. A Deep Transfer Learning Based Architecture for Brain Tumor Classification Using MR Images. Inf. Technol. Control 2022, 51, 332–344.
  53. Dvorák, P.; Menze, B. Local Structured prediction with convolutional neural networks for multimodal brain tumour segmentation. In MICCAI Multimodal Brain Tumour Segmentation Challenge (BraTS); Springer: Munich, Germany, 2015; pp. 13–24. Available online: http://people.csail.mit.edu/menze/papers/dvorak_15_cnnTumor.pdf (accessed on 10 October 2022).
  54. Irsheidat, S.; Duwairi, R. Brain Tumour Detection Using Artificial Convolutional Neural Networks. In Proceedings of the 2020 11th International Conference on Information and Communication Systems (ICICS), Irbid, Jordan, 7–9 April 2020; pp. 197–203.
  55. Sravya, V.; Malathi, S. Survey on Brain Tumour Detection using Machine Learning and Deep Learning. In Proceedings of the 2021 International Conference on Computer Communication and Informatics (ICCCI), Coimbatore, India, 27–29 January 2021; pp. 1–3.
  56. Kumar, S.; Mankame, D.P. Optimization driven deep convolution neural network for brain tumors classification. Biocybern. Biomed. Eng. 2020, 40, 1190–1204.
  57. Maqsood, S.; Damaševičius, R.; Maskeliūnas, R. Multi-modal brain tumor detection using deep neural network and multiclass SVM. Medicina 2022, 58, 1090.
  58. Waghmare, V.K.; Kolekar, M.H. Brain tumors classification using deep learning. In The Internet of Things for Healthcare Technologies; Springer: Singapore, 2021; Volume 73, pp. 155–175.
  59. Kaggle Official Web Page. Available online: https://www.kaggle.com/datasets//brain-tumor-detection// (accessed on 5 April 2022).
  60. Towardsdatascience Official Web Page. Available online: https://towardsdatascience.com/Resnet// (accessed on 5 April 2022).
  61. Towardsdatascience Official Web Page. Available online: https://towardsdatascience.com/mobilenet// (accessed on 5 April 2022).
  62. Geeksforgeeks Official Web Page. Available online: https://www.geeksforgeeks.org/vgg-16-cnn-model// (accessed on 5 April 2022).
  63. Towardsdatascience Official Web Page. Available online: https://towardsdatascience.com/review/ (accessed on 5 April 2022).
  64. Towardsdatascience Official Web Page. Available online: https://towardsdatascience.com/AlexNet// (accessed on 5 April 2022).
Figure 1. The main block diagram of the proposed deep tumor network for BT classification.
Figure 2. The flow diagram of the proposed work for BT classification.
Figure 3. MRI images of two categories: (a) with tumor; (b) without tumor.
Figure 4. The testing and training data of the brain tumor dataset.
Figure 5. The basic block diagram of the ResNet model.
Figure 6. The basic block diagram of the VGG-16 model.
Figure 7. The basic block diagram of the SqueezeNet model.
Figure 8. The basic block diagram of the AlexNet model.
Figure 9. The performance indicators (Acc, Recall, Prec, and F1-Score) of the proposed model and the other TL models for BT classification.
Figure 10. The confusion matrices of the proposed model and the other TL models for BT classification.
Figure 11. The ROC of the proposed model and the other TL models for BT classification.
Figure 12. TNR, TPR, and MCC analysis.
Figure 13. Detection time of the proposed model compared with recent best-performing TL models.
Figure 14. Comparative results of DL models and the proposed model for BT classification.
Table 1. The brain dataset description.

Tumor Class | Images | Patients | Training Samples | Validation Samples | Testing Samples | Class Labels
BT Tumor    | 1050   | 68       | 1050             | 250                | 250             | Yes (1)
No Tumor    | 1050   | 70       | 1050             | 250                | 250             | No (0)

Table 2. The number of parameters of the proposed model in brain tumor classification.

Layer               | Epsilon | No. of Filters | Filter Size
Conv2D layer        | 0.002   | 940            | 1 × 1
Batch Norm layer    | 0.001   | –              | –
Clip ReLU layer     |         |                |
Group Conv2D layer  |         | 940            | 3 × 3
Batch Norm layer    |         |                |
Clip ReLU layer     | 0.002   |                |
Conv2D layer        |         | 300            | 1 × 1
Batch Norm layer    | 0.002   |                |
Conv2D layer        |         | 1260           | 1 × 1
Glob AVG Pool layer |         |                |
FC layer            |         |                |
SoftMax             |         |                |
Class Layer         |         |                |

Table 3. The experimental setup.

Libraries | Keras, Pandas, TensorFlow, NumPy
CPU       | Intel Core i7 processor
GPU       | NVIDIA, 32 GB
Software  | Python 3.7
RAM       | 16 GB

Table 4. Comparative analysis of the proposed model with recent ML/DL models.

Publication | Classification Task        | Models         | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%)
This work   | Binary classification task | Proposed model | 99.1         | 98.9          | 98.6       | 98
[36]        |                            | CNN            | 91.6         | 90.8          | 89.9       | 89.5
[33]        |                            | MobileNet-V2   | 94.9         | 93.6          | 92.8       | 92.6
[34]        |                            | KNN, SVM       | 95.8         | 94.2          | 94         | 93.6
[27]        |                            | AlexNet        | 93.6         | 92.6          | 92         | 91.4
[35]        |                            | VGG-16         | 92.9         | 91.5          | 92.1       | 90.9
[33]        |                            | M-SVM          | 95.8         | 95            | 94.8       | 93
[34]        |                            | ANN            | 93.7         | 92.5          | 91         | 90.50