Article

Classification of Tumor in Brain MR Images Using Deep Convolutional Neural Network and Global Average Pooling

by Prince Priya Malla 1,*, Sudhakar Sahu 1 and Ahmed I. Alutaibi 2,*

1 School of Electronics Engineering, Kalinga Institute of Industrial Technology, Bhubaneswar 751024, India
2 College of Computer and Information Sciences, Majmaah University, Majmaah 11952, Saudi Arabia
* Authors to whom correspondence should be addressed.
Processes 2023, 11(3), 679; https://doi.org/10.3390/pr11030679
Submission received: 13 January 2023 / Revised: 11 February 2023 / Accepted: 13 February 2023 / Published: 23 February 2023

Abstract
Brain tumors can cause serious health complications and lead to death if not detected accurately. Early-stage detection of brain tumors and accurate classification of their types therefore play a major role in diagnosis. Recently, deep convolutional neural network (DCNN) based approaches using brain magnetic resonance imaging (MRI) images have shown excellent performance in detection and classification tasks. However, the accuracy of DCNN architectures depends on the data samples available for training, since these architectures require large amounts of precise data for good output. Thus, we propose a transfer learning-based DCNN framework to classify brain tumors, namely meningioma, glioma, and pituitary tumors. We use VGGNet, a DCNN architecture pre-trained on a huge dataset, and transfer its learned parameters to the target dataset. We also employ two aspects of transfer learning, fine-tuning the convolutional network and freezing its layers, for better performance. Further, the proposed approach uses a Global Average Pooling (GAP) layer at the output to avoid overfitting and vanishing gradient problems. The proposed architecture is assessed and compared with competing deep learning based brain tumor classification approaches on the Figshare dataset. Our proposed approach achieves 98.93% testing accuracy and outperforms the contemporary learning-based approaches.

1. Introduction

In recent years, computer vision-based medical imaging techniques have helped medical experts achieve better diagnosis and treatment [1]. A number of medical imaging modalities, such as X-ray, computed tomography (CT), MRI, and ultrasound, have shown remarkable achievements in the health care system [2]. These medical imaging techniques have been utilized for brain imaging analysis, diagnosis, and treatment. The detection and classification of brain tumors have emerged as an active research topic for researchers, radiologists, and medical experts [3].
Brain tumors arise from unusual growths and unrestrained cell division in the brain. They can deteriorate a patient's health and hasten death if not detected precisely [4]. Generally, brain tumors are grouped into two varieties: malignant tumors, which are regarded as cancerous, and benign tumors, which are considered noncancerous. The objective of tumor detection is to identify the position and extent of the tumor area. This detection task can be accomplished by comparing the abnormal areas with the normal tissue [5]. Accurate imaging analysis of brain tumor images can determine a patient's condition. MRI is an extensively used imaging technique for the study of brain tumors. Brain MR images provide a clear representation of the brain structure and its abnormalities [6]. For brain tumor detection, two important imaging modalities, CT and MRI, are used. However, MRI is preferred over CT because of its non-invasive nature and its high-resolution images of brain tumors. Usually, brain MRI is acquired in four modes: T1-weighted, T1-weighted contrast-enhanced, T2-weighted, and T2-weighted FLAIR. Each mode illustrates different features of a brain tumor [7]. In the literature, various automated approaches have been introduced for brain tumor classification using brain MRI. Over the years, support vector machine (SVM) and neural network (NN) based approaches have been extensively utilized for brain tumor classification [8,9,10,11]. Konur [12] proposed an SVM-based approach where the SVM model is first trained with known samples, and the trained model is then used to process other brain tumor images. Xiao et al. [13] developed a segmentation technique by merging the Fuzzy C-Means and SVM algorithms.
Earlier, machine learning (ML) based tumor detection approaches were considered state-of-the-art techniques. More recently, these ML-based approaches have been unable to provide highly accurate results because of inefficient prediction models and the complex features of medical data. Therefore, many researchers have sought alternative learning-based approaches to improve classification accuracy [14,15]. Deep learning (DL) has brought a sensational development to the machine learning domain, since DL architectures can efficiently fit a model using a large dataset. Unlike SVM and KNN, deep learning models are able to represent complex relationships without using a large number of nodes. Therefore, these approaches have achieved excellent performance in medical imaging applications [16]. Recently, many researchers have developed computer-aided frameworks for medical image classification tasks that produce outstanding results. Yu et al. [17] introduced a computer-aided electroencephalogram (EEG) classification framework named CABLES that classifies six different EEG domains under a unified sequential framework. The authors conducted comprehensive experiments on seven different types of datasets using a 10-fold cross-validation scheme. The proposed EEG signal classification framework showed significant improvements over domain-specific approaches in terms of classification accuracy. Sadiq et al. [18] developed an innovative pre-trained CNN based automated brain-computer interface (BCI) framework for EEG signal identification. This framework essentially investigates the consequences of various limiting factors. The approach was assessed using three public datasets, and the experimental results demonstrated the robustness of the proposed BCI framework in identifying EEG signals. Huang et al. [19] further developed a deep learning based EEG segregation pipeline to overcome the previous limitations of the BCI framework. In this approach, the authors merged the concepts of multiscale principal component analysis, a Hilbert transform based signal resolution approach, and pre-trained CNNs for automatic feature estimation and segregation. The proposed BCI framework was evaluated using three binary-class datasets; it was found to be reliable in identifying EEG signals and showed outstanding performance in terms of classification accuracy.
The traditional diagnostic approach, histopathology, detects disease through microscopic investigation of a biopsy mounted on a glass slide. Traditional diagnosis is performed manually from tissue samples by pathologists. However, these traditional diagnostic approaches are time consuming and difficult. In contrast, a transfer learning-based DCNN framework reduces the workload of the pathologist and allows them to concentrate on vulnerable cases. Moreover, the use of transfer learning can help to process brain MRI images faster and more accurately. Further, automatic detection and classification lead to a quicker, less labor-intensive diagnosis procedure.
DCNN architectures have shown outstanding performance in detecting and classifying brain tumors because they generalize across different levels of features. Also, pre-processing steps such as data augmentation and stain normalization used with DCNNs help to obtain robust and accurate performance. Therefore, we are motivated to use a DCNN architecture to detect and classify brain tumors. However, the accuracy of DCNN architectures depends on the data samples and the training process, since these architectures require large amounts of precise data for good output. To overcome this limitation, transfer learning can be employed for improved performance. Transfer learning has two main aspects: fine-tuning the convolutional network and freezing the layers of the convolutional network. Instead of building a CNN model from scratch, fine-tuning a pre-trained model is sufficient for the classification task.
Therefore, we use a pre-trained DCNN architecture called VGGNet, based on transfer learning, to classify brain tumors, namely meningioma, glioma, and pituitary tumors. The pre-trained architecture has previously been trained on a large dataset and transfers its learned parameters to the target dataset. Therefore, the pre-trained model consumes less time, since it does not require a large dataset to produce results. The early layers of the VGGNet model extract low-level features, for example colors and edges, whereas the later layers extract high-level features, for example objects and contours. The objective is to transfer the knowledge learned by VGGNet to the different target task of classifying brain tumor MRI images. The main reason for choosing VGGNet over other pre-trained networks is its use of small receptive fields rather than massive ones. Owing to its smaller convolutional filters, VGGNet contains a significant number of weight layers, which in turn provides better performance. Further, the proposed approach uses a Global Average Pooling (GAP) layer at the output to avoid overfitting and vanishing gradient problems. The GAP layer transforms the multidimensional feature map into a one-dimensional feature vector [20,21,22]. Since the GAP layer needs no parameter optimization, the overfitting issue is avoided at this layer. The major contributions of the proposed research work are listed as follows:
  • To develop an approach for automatic detection and classification of brain tumors using a transfer learning-based DCNN architecture. The proposed approach is proficient in extracting important features from the Figshare dataset.
  • A data augmentation technique is used to artificially increase the size of the training image data by rotating and flipping the original dataset. More training image data helps the CNN architecture to boost performance and produce skillful models.
  • A GAP layer is used at the output to avoid overfitting and vanishing gradient problems.
  • To compare the proposed framework with competing brain tumor classification approaches in terms of accuracy on the Figshare dataset.
The remainder of the article is structured as follows: Section 2 reviews related work. Section 3 provides a brief outline of DCNNs, transfer learning, and the pre-trained VGGNet model. Section 4 illustrates the proposed brain tumor detection approach. Section 5 presents a thorough description of the dataset used in the experiment, the evaluation metrics, the training process, and the results and discussion. Finally, Section 6 provides the conclusion and possible future prospects of this work.

2. Related Work

The brain tumor detection and classification problem has evolved into a prominent research topic over the past two decades because of its high medical relevance. Timely detection, diagnosis, and classification of brain tumors are instrumental in effective treatment planning for the recovery and life extension of the patient. Brain tumor detection is a procedure to differentiate abnormal tissues, for example active tumor tissue and edema, from normal tissues such as gray matter and white matter. Generally, brain tumor detection processes are grouped into three types: manual detection, semi-automatic detection, and fully automatic detection. Currently, medical experts give more importance to fully automatic detection methods, where the tumor location and area are detected automatically without human intervention by setting appropriate parameters.
The deep learning model extends conventional neural networks by adding more hidden layers between the input layer and output layer of the network in order to model more complex and nonlinear relations. A number of deep learning models, for instance the convolutional neural network (CNN), deep neural network (DNN), and recurrent neural network (RNN), are extensively employed for medical imaging applications. Here, we summarize existing deep learning-based work on the brain tumor classification task.
Havaei et al. [23] introduced an automated DNN-based brain tumor segmentation technique. This method exploits both local and global contextual features of the brain at the same time. The fully connected (FC) layer used at the last layer of the network improves the network speed 40-fold. This model is applied specifically to the segmentation of glioblastoma tumors pictured in brain MRI. Rehman et al. [24] employed three different CNN-based architectures, namely AlexNet, GoogLeNet, and VGGNet, for the classification of brain tumors. This framework attained an accuracy of 98.69% using the VGG16 network.
Instead of extracting features from the bottom layers of the pre-trained network, Noreen et al. [25] introduced an efficient framework in which features are extracted from multiple levels and then concatenated to diagnose the brain tumor. Initially, the features are extracted from DenseNet201 and concatenated. At last, the concatenated features are provided as input to the softmax classifier for classification. Similar steps are applied to the pre-trained Inception-v3 model. The performance of both models is assessed and validated using a three-class brain tumor dataset. The framework achieved classification accuracies of up to 99.34% with the Inception-v3 and DenseNet201 models.
Li et al. [26] developed a multi-CNN structure by combining multimodal information fusion and CNNs to detect brain tumors. The authors extended 2D-CNNs to multimodal 3D-CNNs in order to capture different information among the modalities. Also, a superior weighted loss function is introduced to minimize the interference of the non-focal area, which in turn increases the detection accuracy. Sajjad et al. [27] developed a CNN-based multi-grade classification system that supports clear segmentation of the tumor region. In this system, a deep learning technique is first utilized to segment tumor areas from brain MRI. Subsequently, the proposed model is trained effectively to avoid data-deficiency problems when dealing with MR images. Finally, the trained network is fine-tuned using augmented data to classify brain tumors. This method achieves an accuracy of 90.67% in classifying tumors into different grades.
Anaraki et al. [1] proposed a tumor classification approach that takes advantage of both CNNs and the genetic algorithm (GA). Instead of adopting a pre-defined deep neural network model, this approach uses the GA to evolve the CNN architecture. The approach attained an accuracy of 90.9% for classifying three glioma grades. Zhou et al. [28] presented a universal methodology based on DenseNet and an RNN to detect numerous types of brain tumors in brain MRI. The methodology can successfully handle variations in the location, shape, and size of tumors. In this method, DenseNet is first applied to extract features from the 2D slices. Then, the RNN is used to classify the obtained sequential features. The effectiveness of this approach was evaluated on public and proprietary datasets, attaining an accuracy of 92.13%.
Afshar et al. [29] proposed a learning-based architecture called capsule networks (CapsNets) for the detection of brain tumors. CNNs are perceived to need enormous amounts of data for training. The introduction of CapsNets can overcome the training complexity of CNNs, as they require less data for training. This approach incorporates CapsNets for brain tumor classification and achieves better performance than CNNs. Deepak et al. [30] proposed a transfer learning-based tumor classification system utilizing a pre-trained GoogLeNet to categorize three prominent brain tumors, namely glioma, meningioma, and pituitary. This method effectively classifies tumors into different grades with a classification accuracy of 98%. Frid-Adar et al. [31] introduced a generative adversarial network for medical image classification to address the scarcity of medical image datasets. The approach generates synthetic medical images that are utilized to improve performance. This approach was validated on a limited CT image dataset of liver lesions and showed a superior classification performance of 85.7% sensitivity and 92.4% specificity. Abdullah et al. [32] introduced a robust transfer learning enabled lightweight network for the segmentation of brain tumors based on the pre-trained VGG network. The efficacy of the approach was evaluated on the BRATS2015 dataset, where the framework attained a global accuracy of 98.11%.

3. Deep Convolutional Neural Network

CNNs are feed-forward neural networks that rely on features such as receptive fields, weight sharing, and pooling operations in order to characterize image data [33]. Generally, a CNN involves three key layer types: the convolutional layer, the pooling layer, and the fully connected (FC) layer. In convolutional layers, the convolution operation between various kernels and the input image is performed to obtain feature maps. The convolutional layers drastically reduce the total number of network parameters through weight sharing and identify various patterns through their receptive fields. The pooling layer pools features via a window of a particular size. It reduces the size of the feature maps and the number of parameters used in the CNN, which helps to reduce computational cost. Max pooling is the most widely used pooling technique for designing CNNs; it takes the maximum value of the input within the pooling window. The FC layer acts as a classifier. The features propagate forward through the network to the FC layer. Finally, the back-propagation procedure is utilized to update the network parameters with the help of gradient descent.
Each convolutional layer extracts features from the previous layer. The feature map can be calculated as

$F_s = b_s + \sum_{r} W_{sr} \ast X_r$

where $X_r$ represents the $r$th input channel, $W_{sr}$ the corresponding sub-kernel, $b_s$ a bias term, and $\ast$ the convolution operation. Thus, the feature map is computed by summing the application of $R$ dissimilar $N \times N$ convolution filters and a bias term. To obtain non-linear features, non-linear functions such as the sigmoid or the rectified linear unit are applied to the convolution result. Recently, a max-out non-linearity has been used effectively in exhibiting valuable features. The max-out feature related to $K$ feature maps at spatial position $(i, j)$ is

$Z_{s,i,j} = \max\left(F_{s,i,j},\, F_{s+1,i,j},\, \ldots,\, F_{s+K,i,j}\right)$

The max-pooling operation determines the maximum feature value in each feature map. It reduces the size of the feature map depending on the pooling size $p$. It is characterized as

$H_{s,i,j} = \max_{p \in P} Z_{s,\, i+p,\, j+p}$
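As an illustration of these three operations, the following NumPy sketch computes a single feature map, applies a max-out over adjacent maps, and max-pools the result. The array shapes and the 3 × 3 kernel size are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.signal import correlate2d

def conv_feature_map(x, kernels, bias):
    # F_s = b_s + sum_r W_sr * X_r for one output channel s
    return bias + sum(correlate2d(x[r], kernels[r], mode="valid")
                      for r in range(x.shape[0]))

def max_out(maps):
    # Z_{s,i,j} = max over K adjacent feature maps at each position (i, j)
    return np.max(maps, axis=0)

def max_pool(z, p=2):
    # H_{s,i,j} = maximum within each non-overlapping p x p window
    h, w = z.shape[0] // p, z.shape[1] // p
    return z[:h * p, :w * p].reshape(h, p, w, p).max(axis=(1, 3))

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8, 8))            # R = 3 input channels
kernels = rng.standard_normal((2, 3, 3, 3))   # K = 2 sets of 3x3 sub-kernels
maps = np.stack([conv_feature_map(x, k, bias=0.1) for k in kernels])
z = max_out(maps)                             # (6, 6) max-out feature map
print(max_pool(z).shape)                      # -> (3, 3)
```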
In medical image processing, the data available to train a DCNN model is limited. This results in over-fitting, which lowers the performance of the DCNN model. To solve this issue, the concept of transfer learning is introduced [34]. Transfer learning is a part of deep learning based on the idea that a model pre-trained on a large dataset can transfer its learned parameters to a small dataset, usually the target dataset. To use the pre-trained model for a different task, the last FC layers are trained with randomly initialized weights on the target dataset. The transfer of the pre-trained network parameters thus delivers a new target model with effective feature extraction proficiency and lower computational cost. Transfer learning has shown superior performance in medical imaging applications with respect to accuracy, training time, and error rates. Here, we use a popular pre-trained network, VGG16, for the brain tumor classification task.
VGG16 is a pre-trained DCNN introduced by Simonyan et al. in 2014 [35]. It involves 16 weight layers, comprising 13 convolutional layers and 3 FC layers, interleaved with 5 max-pooling layers. The input image fed to the first convolutional layer has a size of 224 × 224. The rectified linear unit (ReLU) is used as the activation function after every convolutional layer. In addition, max-pooling layers are used in the VGG16 network to reduce the spatial size of the feature maps. To prevent over-fitting, dropout regularization is used in the FC layers. Finally, a softmax layer is used after the last FC layer to classify the given image. VGG16 replaces large kernels with multiple 3 × 3 filters applied in a sequential manner. The use of multiple stacked small kernels is more effective than a single large kernel and increases the depth of the ConvNet. The ability of the network to learn hidden features therefore increases with the depth of the ConvNet.
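The standard VGG16 backbone can be inspected directly in Keras; the snippet below is a sketch of that inspection (the weight download and layer layout are standard Keras behaviour, not something specific to this paper).

```python
import tensorflow as tf

# Load VGG16 pre-trained on ImageNet, without the original 1000-class FC head.
base = tf.keras.applications.VGG16(
    weights="imagenet",
    include_top=False,
    input_shape=(224, 224, 3),
)
# Prints the 13 convolutional layers arranged in 5 blocks,
# each block ending in a 2 x 2 max-pooling layer.
base.summary()
```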

4. Proposed Transfer Learning Based Brain Tumor Classification Framework

The proposed transfer learning-based brain tumor classification framework is demonstrated in Figure 1. The proposed approach comprises three main stages: preprocessing, feature extraction, and classification. In the pre-processing step, we use the contrast stretching technique to enhance the brain MRI images. To reduce overfitting, a data augmentation technique is utilized to artificially increase the size of the training image data by rotating and flipping the original dataset. In the next step, the pre-trained DCNN architecture VGG16 is applied to the target brain MRI dataset to extract distinctive features from the brain MRI images. Two important transfer learning strategies, fine-tuning the VGG16 architecture and freezing its layers, are employed. Further, the proposed approach uses a GAP layer at the output to avoid overfitting and vanishing gradient problems. Finally, the distinctive features are classified using a log-softmax layer. The steps used in this framework are explained in detail as follows.

4.1. Data Pre-Processing

The fundamental task of the pre-processing step is to enhance the quality of the brain MR images and improve their contrast. Generally, brain MR images are acquired from various modalities, which can reduce the intensity level and produce artifacts. Therefore, contrast enhancement of brain MR images is important before further processing by human or computer vision systems [36]. Furthermore, the preprocessing step can help to increase the signal-to-noise ratio, remove background noise, and preserve the edges of the MR images. We therefore use a contrast stretching algorithm in the pre-processing to produce high-contrast MR images. Deep neural network models can enhance their learning capability and predict more accurate output with a large training dataset [37]. We utilize data augmentation techniques in order to obtain a large amount of training data from the original dataset. This helps to reduce the over-fitting effect that arises during training. Rotations and flipping are used as data augmentation techniques to generate the large training dataset. Each image in the original dataset can be rotated by 90, 180, or 270 degrees. Similarly, each image can be flipped vertically and horizontally. In this way, a large training dataset is generated and given as input to the pre-trained DCNN architecture.
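A minimal sketch of this pre-processing stage is given below, assuming min-max contrast stretching and Albumentations-style augmentation; the probability values mirror those reported in Section 5.3, but the paper does not specify its exact implementation, so treat the details as assumptions.

```python
import numpy as np
import albumentations as A

def contrast_stretch(img: np.ndarray) -> np.ndarray:
    """Linearly rescale intensities to the full [0, 1] range."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)

# Rotation/flip augmentation with the probabilities listed in Table 1.
augment = A.Compose([
    A.VerticalFlip(p=0.5),
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.3),
    A.ShiftScaleRotate(p=0.5),
])

mri_slice = np.random.rand(224, 224).astype(np.float32)  # placeholder slice
stretched = contrast_stretch(mri_slice)
augmented = augment(image=stretched)["image"]
```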

4.2. DCNN Based Feature Extraction

The large training dataset is fed into the DCNN architecture, which extracts discriminative features that characterize the properties of the brain MRI images. In this work, we employ a pre-trained DCNN architecture, the VGG16 network. We also use transfer learning techniques to improve classification accuracy and accelerate the learning process. VGG16 is trained on the ImageNet dataset, which comprises 1.2 million images across 1000 classes. The pre-trained weights are used for further training of VGG16 on the brain tumor classification problem, where data availability is limited. This process helps to minimize the over-fitting effect. The discriminative feature extraction involves two aspects of transfer learning: fine-tuning and freezing. Fine-tuning replaces the last layers of the VGG16 model. Instead of retraining the whole DCNN architecture, only the weights at the top of the pre-trained model are modified. The last FC layers are trained on the Figshare dataset (the target dataset) with a small learning rate, while the other layers are kept stationary by freezing their learning parameters; the learning rate of all blocks except the last layer is set to zero. Even though the Figshare dataset differs from the ImageNet dataset, the low-level features are analogous. Therefore, transferring the learned parameters enhances the feature extraction capability and reduces computational cost.
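The freeze-then-fine-tune strategy can be sketched in Keras as follows; the layer choices and the learning rate here are illustrative assumptions about the setup described above, not the authors' released code.

```python
import tensorflow as tf

# Pre-trained VGG16 backbone without its ImageNet classification head.
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze all pre-trained convolutional blocks

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),        # GAP instead of an FC stack
    tf.keras.layers.Dropout(0.2),                    # dropout rate from Table 1
    tf.keras.layers.Dense(3, activation="softmax"),  # meningioma/glioma/pituitary
])

# Only the new head is trained, with a small learning rate.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
```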

4.3. Classification of Brain Tumors

The proposed approach uses a Global Average Pooling (GAP) layer at the output as an alternative to an FC layer. The GAP layer acts as a flattening layer that transforms the multi-dimensional feature map into a one-dimensional feature vector. If there are M feature maps, each of size N × N, the GAP layer transforms the (M × N × N) feature tensor into a (1 × M) feature vector by averaging each map. Consequently, it accomplishes a linear transformation of the vectorized feature maps, which can be interpreted as groups of confidence maps. The GAP layer is thus more native to the convolution structure, enforcing correspondences between feature maps and categories. Since the GAP layer needs no parameter optimization, the overfitting and vanishing gradient problems are mitigated at this layer. The extracted features from the VGG16 network are processed through the softmax classifier. First, the discriminative features from the pre-trained DCNN model are fine-tuned to the Figshare dataset. Then, the softmax layer classifies the brain tumors into three classes by regulating the number of neurons. A small illustration of the GAP transformation is given below; the detailed process of the proposed transfer learning-based brain tumor classification framework is then summarized in Algorithm 1.
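This NumPy sketch shows the shape transformation the GAP layer performs: each of the M feature maps is averaged to a single value, leaving no parameters to optimize. The shapes are illustrative, matching VGG16's final block output.

```python
import numpy as np

M, N = 512, 7                                # e.g., VGG16's final conv block output
feature_maps = np.random.rand(M, N, N)
gap_vector = feature_maps.mean(axis=(1, 2))  # one average per feature map
print(gap_vector.shape)                      # (512,) -- a (1 x M) feature vector
```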
Algorithm 1: Brain Tumor Classification using Transfer Learning
Input: Brain MRI Dataset (Figshare), Learning rate: 0.0001, Batch size: 20, Epochs: 100
Step 1: Apply Pre-processing step to enhance the quality of the brain MR images.
Step 2: Apply data augmentation techniques to get large training data.
repeat
for epoch 1 to N do
Step 3: Set pre-trained DCNN layer functions and filters
Step 4: Train the Brain MRI Dataset as a sample knowledge base
Step 5: Extract the discriminative features from the brain MR images and feed them into the DCNN architecture
Step 6: Compute the outcomes of DCNN architecture
Step 7: Apply the GAP layer to transform the (M × N × N) feature map into the (1 × M) feature vector.
Step 8: Compute MSE loss function
Step 9: Redefine the DCNN layers to improve classification accuracy
Step 10: Do for all MR images
end for
until convergence
Output: Predicted tumor class for each MR slice
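A hedged Keras sketch of the training loop in Algorithm 1 follows. The learning rate, batch size, epoch count, and MSE loss come from the algorithm's inputs and Step 8; the placeholder data and the rest of the wiring are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf

# Placeholder tensors standing in for the pre-processed, augmented
# Figshare slices and their one-hot tumor labels (illustration only).
x_train = np.random.rand(8, 224, 224, 3).astype("float32")
y_train = tf.keras.utils.to_categorical(np.random.randint(0, 3, 8), 3)

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Learning rate 0.0001, batch size 20, 100 epochs, MSE loss (Algorithm 1).
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="mse", metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=20, epochs=100)
```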

5. Experiments and Discussion

We assess the efficiency of the proposed transfer learning-based framework on the widely used Figshare dataset. Here, we provide a detailed description of the dataset, evaluation metrics, network training, and performance evaluation.

5.1. Datasets

We use a publicly available brain MRI dataset called Figshare, developed by Cheng et al. [38]. The dataset contains 3064 T1-weighted brain MRI slices of three different categories of tumor, namely meningioma, glioma, and pituitary, obtained from 233 patients. The dataset comprises 708 meningioma slices, 1426 glioma slices, and 930 pituitary slices. All images are stored in .mat format. The MRI slices are normalized to an intensity range of 0 to 1. Figure 2 shows the three categories of brain tumors from the Figshare dataset. Figure 3 demonstrates images obtained after the application of the data augmentation technique.
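For reference, the sketch below reads one slice of this dataset with h5py. The .mat files are commonly distributed in MATLAB v7.3 (HDF5) format with a cjdata struct holding the image and an integer label (1 = meningioma, 2 = glioma, 3 = pituitary); treat these field names as an assumption about the release rather than something stated in the paper.

```python
import h5py
import numpy as np

# Read one slice and its label from a Figshare .mat file (e.g., "1.mat").
with h5py.File("1.mat", "r") as f:
    image = np.array(f["cjdata/image"], dtype=np.float32)
    label = int(np.array(f["cjdata/label"]).squeeze())

# Normalize intensities to [0, 1], as described above.
image = (image - image.min()) / (image.max() - image.min() + 1e-8)
```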

5.2. Evaluation Metrics

To assess the effectiveness of the proposed model for the brain tumor detection and classification task, we use four performance measures: accuracy, sensitivity, precision, and specificity.
Accuracy: a performance metric that determines the percentage of correctly classified image samples among the total number of image samples, without considering the image class labels. It is calculated using the following relation:

$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$

Sensitivity: a performance metric that determines the ability of the model to properly classify brain tumors. It is calculated using the following relation:

$\text{Sensitivity} = \frac{TP}{TP + FN}$

Specificity: a performance metric that determines the ability of the model to accurately classify negative samples. It is calculated using the following relation:

$\text{Specificity} = \frac{TN}{TN + FP}$

Precision: a performance metric that determines the true positive measure. It is calculated using the following relation:

$\text{Precision} = \frac{TP}{TP + FP}$

The notations used in the above equations are as follows: TP: true positive, TN: true negative, FP: false positive, and FN: false negative.
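These four measures can be computed from a confusion matrix under a one-vs-rest view of the three classes; the following scikit-learn sketch uses made-up labels purely for illustration.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Illustrative ground-truth and predicted labels for three tumor classes.
y_true = np.array([0, 1, 2, 2, 1, 0, 2, 1])
y_pred = np.array([0, 1, 2, 1, 1, 0, 2, 2])

cm = confusion_matrix(y_true, y_pred)
tp = np.diag(cm)
fp = cm.sum(axis=0) - tp
fn = cm.sum(axis=1) - tp
tn = cm.sum() - (tp + fp + fn)

accuracy    = (tp + tn) / (tp + tn + fp + fn)   # per-class, one-vs-rest
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
precision   = tp / (tp + fp)
```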

5.3. Network Training

Here, we explain the detailed architecture and training procedure of the proposed transfer learning-based framework. The architecture of our proposed DCNN framework is illustrated in Figure 4. The convolution operation is accomplished using 3 × 3 convolution filters with zero padding. Similarly, the pooling operation is performed using max-pooling of size 2 × 2. The MRI slices are resized to 224 × 224 pixels using interpolation. The augmentation parameter values used for data augmentation are set as vertical flip = 0.5, horizontal flip = 0.5, random brightness contrast = 0.3, and shift scale rotate = 0.5. Each MRI slice is normalized by subtracting the mean image calculated over the training set. The FC layers are trained with the ReLU activation function. The dropout rate used to avoid overfitting is 0.2. The Adam optimizer is utilized for network optimization with a learning rate of 0.001. The parameters $\beta_1$ and $\beta_2$ are set to 0.6 and 0.8, respectively. We use 100 epochs with a batch size of 20 to train the DCNN model. We use 70% of the MRI dataset for training, 15% for validation, and the remaining 15% for testing. The hyperparameters used in our proposed methodology are listed in Table 1.
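The 70/15/15 partition can be drawn with scikit-learn as sketched below; a stratified two-stage split is an assumption, since the paper does not state how the partitions were formed.

```python
import numpy as np
from sklearn.model_selection import train_test_split

indices = np.arange(3064)               # one index per Figshare slice
labels = np.random.randint(0, 3, 3064)  # placeholder class labels

# Hold out 15% for testing, then carve 15% of the full set for validation.
trainval_idx, test_idx, y_trainval, y_test = train_test_split(
    indices, labels, test_size=0.15, stratify=labels, random_state=42)
train_idx, val_idx, y_train, y_val = train_test_split(
    trainval_idx, y_trainval, test_size=0.15 / 0.85,
    stratify=y_trainval, random_state=42)
```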

5.4. Results and Discussion

In the first part of the experiment, we assess the performance of the proposed transfer learning-based DCNN model on the Figshare dataset with regard to accuracy, sensitivity, precision, and specificity. Table 2 shows the performance measures of the proposed transfer learning-based classification model. The proposed classification model shows the highest percentages for the four important classification measures when evaluated on the Figshare dataset. As seen, our classification framework reached an accuracy, sensitivity, specificity, and precision of 98.93%, 98.68%, 99.13%, and 99.11%, respectively. We also present the accuracy of each class of tumor, as well as the mean accuracy, in Table 3. In our experiment, we used 106 brain MRI slices with meningioma tumors, 214 with glioma tumors, and 139 with pituitary tumors for testing the proposed classification framework. As observed from Table 3, we obtain a classification accuracy of 97.88% for meningioma, 99.29% for glioma, and 98.38% for pituitary. Finally, the mean accuracy over the three classes of tumors is 98.51%. Table 4 lists the MSE loss after each convolutional layer of the proposed transfer learning-based DCNN model during training. In addition, we present sample experimental results obtained by the proposed framework in Figure 5, which shows the original brain MR image with tumor, white matter, segmented image, gray matter, skull-stripped image, and extracted tumor, respectively. The confusion matrix of the proposed transfer learning-based DCNN model is illustrated in Figure 6.
Usually, feature maps assist in determining the active areas of brain MR images, which play a significant role in the tumor classification procedure. The outer layers of the network emphasize coarse features such as the shape of the brain and the locations of tumors. The granularity of the features decreases as we proceed through the layers. The last layer produces fine-grained features that can locate small tumors. Finally, the generated feature maps are employed to classify the brain MRI using a sigmoid activation function. The receiver operating characteristic (ROC) curve and precision-recall (PR) curve of the proposed classification methodology are shown in Figure 7. The training progress of the proposed transfer learning-based framework is presented in Figure 8, which shows the variation of accuracy percentage versus the number of epochs. With an increase in epochs, the model attains the minimum mean squared error and converges. Finally, our proposed transfer learning-based DCNN model attains an accuracy of 98.9% on the Figshare dataset.
In the second part of the experiment, the competency of the proposed transfer learning-based DCNN framework is compared with existing brain tumor classification techniques on the Figshare dataset. We consider existing methods introduced by Ismael et al. [3], Abiwinanda et al. [7], Swati et al. [16], Afshar et al. [29], and Pashaei et al. [39] for the comparison. Ismael et al. [3] combined statistical features and NN algorithms for the brain tumor classification task and attained an accuracy of 91.9% on the Figshare dataset. Abiwinanda et al. [7] developed a CNN-based architecture with max-pooling for brain tumor classification and attained an accuracy of 84.19%. Afshar et al. [29] used a CapsNet-based model for the detection of brain tumors; it overcomes the training complexity of CNNs and achieves a classification accuracy of 86.56%. Pashaei et al. [39] used a CNN to extract hidden features from brain MRI and then employed extreme learning machines (ELMs) to classify the extracted features, obtaining a classification accuracy of 93.68%. Jun et al. [40] introduced a unique attention-based mechanism by integrating a multipath network for the brain tumor classification task. The primary objective of the attention mechanism is to select only the critical information belonging to the target area while ignoring irrelevant details. In addition, the multipath network is utilized to reduce complexity. This scheme was evaluated on the Figshare dataset and attained an accuracy of 98.61%. Masood et al. [37] developed an efficient custom Mask Region-based CNN scheme for tumor classification and segmentation under adverse conditions, for example noisy input MR images, asymmetrical shapes, and indistinct boundaries. This scheme was evaluated on the Figshare dataset and attained an accuracy of 98.34%. The accuracy results of the different competing approaches are shown in Table 5. The proposed transfer learning-based DCNN model shows superior accuracy compared to the other existing approaches.
It is observed from our research that different brain tumors have different shapes and positions. For example, a glioma tumor is typically encircled by edema, a pituitary tumor exists near the sphenoidal sinus and optic chiasma, and a meningioma normally occurs very close to the skull and cerebrospinal fluid. Thus, it is very challenging to classify the discriminative features of a particular brain tumor. Also, the discriminative features are correlated with the position of the brain tumor. Most deep learning-based approaches are unable to classify such features due to the unavailability of data. However, our proposed transfer learning-based tumor detection and classification approach requires less data for learning, since the pre-trained model has already been trained on a large amount of data. Therefore, the proposed approach is capable of learning and classifying different types of features, and it provides accurate classification results for the three categories of tumors.
In addition, we have performed a comparative analysis of the results obtained by the proposed transfer learning based DCNN model on the Figshare dataset with and without the data pre-processing step, as shown in Table 6. The objective is to show the effectiveness of the data pre-processing step used in our proposed approach. The proposed approach shows better results in terms of accuracy, sensitivity, specificity, and precision with the data pre-processing step than without it.

6. Conclusions

In this research work, we use the pre-trained DCNN architecture VGG16, based on transfer learning, for the classification of brain tumors, namely meningioma, glioma, and pituitary tumors. The proposed work overcomes the training-data limitation of DCNN architectures through the use of transfer learning and provides more accurate classification results. Owing to the GAP layer at the output, the proposed framework avoids overfitting and vanishing gradient problems. Our proposed approach attains a classification accuracy of 98.93% and outperforms the state-of-the-art learning-based approaches on the Figshare dataset. This research can support medical experts in making more accurate decisions regarding the type of brain tumor and may consequently reduce diagnosis error. Although the proposed transfer learning based DCNN framework has shown outstanding results, certain improvements are still possible. In the future, a larger dataset can be used for training. Further, the issues of feature dimensionality that arise when transferring weights and parameters can be addressed.

Author Contributions

Conceptualization, P.P.M. and S.S.; methodology, P.P.M., and S.S.; software, P.P.M., S.S. and A.I.A.; validation, P.P.M., S.S. and A.I.A.; formal analysis, P.P.M., S.S. and A.I.A.; writing—original draft preparation, P.P.M.; writing—review and editing, P.P.M., S.S. and A.I.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to thank the Deanship of Scientific Research at Majmaah University for supporting this work under Project Number R-2023-9.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data will be available on request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kujur, A.; Raza, Z.; Khan, A.A.; Wechtaisong, C. Data Complexity Based Evaluation of the Model Dependence of Brain MRI Images for Classification of Brain Tumor and Alzheimer’s Disease. IEEE Access 2022, 10, 112117–112133.
  2. Zhu, Z.; He, X.; Qi, G.; Li, Y.; Cong, B.; Liu, Y. Brain tumor segmentation based on the fusion of deep semantics and edge information in multimodal MRI. Inf. Fusion 2023, 91, 376–387.
  3. Ismael, M.R.; Abdel-Qader, I. Brain Tumor Classification via Statistical Features and Back-Propagation Neural Network. In Proceedings of the 2018 IEEE International Conference on Electro/Information Technology (EIT), Rochester, MI, USA, 3–5 May 2018; pp. 0252–0257.
  4. Nawaz, S.A.; Khan, D.M.; Qadri, S. Brain Tumor Classification Based on Hybrid Optimized Multi-features Analysis Using Magnetic Resonance Imaging Dataset. Appl. Artif. Intell. 2022, 36, 2031824.
  5. Fang, L.; Wang, X. Brain tumor segmentation based on the dual-path network of multi-modal MRI images. Pattern Recognit. 2022, 124, 108434.
  6. Toğaçar, M.; Ergen, B.; Cömert, Z. Tumor type detection in brain MR images of the deep model developed using hypercolumn technique, attention modules, and residual blocks. Med. Biol. Eng. Comput. 2021, 59, 57–70.
  7. Abiwinanda, N.; Hanif, M.; Hesaputra, S.T.; Handayani, A.; Mengko, T.R. Brain Tumor Classification using Convolutional Neural Network. In Proceedings of the World Congress on Medical Physics and Biomedical Engineering 2018, Prague, Czech Republic, 3–8 June 2019; pp. 183–189.
  8. Mishra, S.K.; Deepthi, V.H. Brain image classification by the combination of different wavelet transforms and support vector machine classification. J. Ambient. Intell. Humaniz. Comput. 2021, 12, 6741–6749.
  9. Chen, B.; Zhang, L.; Chen, H.; Liang, K.; Chen, X. A novel extended Kalman filter with support vector machine based method for the automatic diagnosis and segmentation of brain tumors. Comput. Methods Programs Biomed. 2021, 200, 105797.
  10. Bonte, S.; Goethals, I.; Van, R.H. Machine learning based brain Tumour segmentation on limited data using local texture and abnormality. Comput. Biol. Med. 2018, 98, 39–47.
  11. Basha, A.J.; Balaji, B.S.; Poornima, S.; Prathilothamai, M.; Venkatachalam, K. Support vector machine and simple recurrent network based automatic sleep stage classification of fuzzy kernel. J. Ambient. Intell. Humaniz. Comput. 2021, 12, 6189–6197.
  12. Konur, U. Computerized detection of spina bifida using SVM with Zernike moments of fetal skulls in ultrasound screening. Biomed. Signal Process. Control 2018, 43, 18–30.
  13. Xiao, J.; Tong, Y. Research of brain MRI image segmentation algorithm based on FCM and SVM. In Proceedings of the 26th Chinese Control and Decision Conference (2014 CCDC), Changsha, China, 31 May–2 June 2014; pp. 1–6.
  14. Gupta, M.; Rajagopalan, V.; Pioro, E.P.; Rao, B.P. Volumetric analysis of MR images for glioma classification and their effect on brain tissues. Signal Image Video Process. 2017, 11, 1337–1345.
  15. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88.
  16. Swati, Z.N.K.; Zhao, Q.; Kabir, M.; Ali, F.; Ali, Z.; Ahmed, S.; Lu, J. Brain tumor classification for MR images using transfer learning and fine-tuning. Comput. Med. Imaging Graph. 2019, 75, 34–46.
  17. Frid-Adar, M.; Diamant, I.; Klang, E.; Amitai, M.; Goldberger, J.; Greenspan, H. GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification. Neurocomputing 2018, 321, 321–331.
  18. Havaei, M.; Davy, A.; Warde-Farley, D.; Biard, A.; Courville, A.; Bengio, Y.; Pal, C.; Jodoin, P.; Larochelle, H. Brain tumor segmentation with Deep Neural Networks. Med. Image Anal. 2017, 35, 18–31.
  19. Rehman, A.; Naz, S.; Razzak, M.I.; Akram, F.; Imran, M. A deep learning-based framework for automatic brain tumors classification using transfer learning. Circuits Syst. Signal Process. 2019, 39, 757–775.
  20. Noreen, N.; Palaniappan, S.; Qayyum, A.; Ahmad, I.; Imran, M.; Shoaib, M. A Deep Learning Model Based on Concatenation Approach for the Diagnosis of Brain Tumor. IEEE Access 2020, 8, 55135–55144.
  21. Afshar, P.; Mohammadi, A.; Plataniotis, K.N. Brain Tumor Type Classification via Capsule Networks. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 3129–3133.
  22. Li, M.; Kuang, L.; Xu, S.; Sha, Z. Brain Tumor Detection Based on Multimodal Information Fusion and Convolutional Neural Network. IEEE Access 2019, 7, 180134–180146.
  23. Sajjad, M.; Khan, S.; Muhammad, K.; Wu, W.; Ullah, A.; Baik, S.W. Multi-grade brain tumor classification using deep CNN with extensive data augmentation. J. Comput. Sci. 2019, 30, 174–182.
  24. Zhou, Y.; Li, Z.; Zhu, H.; Chen, C.; Gao, M.; Xu, K.; Xu, J. Holistic Brain Tumor Screening and Classification based on DenseNet and Recurrent Neural Network. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 4th International Workshop, BrainLes 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, 16 September 2018, Revised Selected Papers, Part I; Springer: Berlin/Heidelberg, Germany, 2018; pp. 208–217.
  25. Pashaei, A.; Sajedi, H.; Jazayeri, N. Brain Tumor Classification via Convolutional Neural Network and Extreme Learning Machines. In Proceedings of the 2018 8th International Conference on Computer and Knowledge Engineering (ICCKE), Mashhad, Iran, 25–26 October 2018; pp. 314–319.
  26. Chen, X.; Wang, X.; Zhang, K.; Fung, K.M.; Thai, T.C.; Moore, K.; Mannel, R.S.; Liu, H.; Zheng, B.; Qiu, Y. Recent advances and clinical applications of deep learning in medical image analysis. Med. Image Anal. 2022, 79, 102444.
  27. Wang, L.; Wang, H.; Huang, Y.; Yan, B.; Chang, Z.; Liu, Z.; Zhao, M.; Cui, L.; Song, J.; Li, F. Trends in the application of deep learning networks in medical image analysis: Evolution between 2012 and 2020. Eur. J. Radiol. 2022, 146, 110069.
  28. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2017, 60, 84–90.
  29. Long, M.; Cao, Y.; Wang, J.; Jordan, M. Learning Transferable Features with Deep Adaptation Networks. In Proceedings of the International Conference on Machine Learning, 2015; JMLR; pp. 97–105.
  30. Deepak, S.; Ameer, P.M. Brain tumor classification using deep CNN features via transfer learning. Comput. Biol. Med. 2019, 111, 103345.
  31. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
  32. Cheng, J. Brain Tumor Dataset. Figshare. Dataset. 2017. Available online: https://figshare.com/articles/dataset/brain_tumor_dataset/1512427 (accessed on 3 April 2017).
  33. Zhang, X.; Zhang, X. Global Learnable Pooling with Enhancing Distinctive Feature for Image Classification. IEEE Access 2020, 8, 98539–98547.
  34. Hsiao, T.Y.; Chang, Y.C.; Chou, H.H.; Chiu, C.T. Filter-based Deep-Compression with Global Average Pooling for Convolutional Networks. In Proceedings of the 2018 IEEE International Workshop on Signal Processing Systems (SiPS), Anaheim, CA, USA, 26–29 November 2018; pp. 247–251.
  35. Li, J.; Han, Y.; Zhang, M.; Li, G.; Zhang, B. Multi-scale residual network model combined with Global Average Pooling for action recognition. Multimed. Tools Appl. 2022, 81, 1375–1393.
  36. Jun, W.; Liyuan, Z. Brain Tumor Classification Based on Attention Guided Deep Learning Model. Int. J. Comput. Intell. Syst. 2022, 15, 1–9.
  37. Masood, M.; Nazir, T.; Nawaz, M.; Mehmood, A.; Rashid, J.; Kwon, H.Y.; Mahmood, T.; Hussain, A. A novel deep learning method for recognition and classification of brain tumors from MRI images. Diagnostics 2021, 11, 744.
  38. Yu, X.; Aziz, M.Z.; Sadiq, M.T.; Jia, K.; Fan, Z.; Xiao, G. Computerized multidomain EEG classification system: A new paradigm. IEEE J. Biomed. Health Inform. 2022, 26, 3626–3637.
  39. Sadiq, M.T.; Aziz, M.Z.; Almogren, A.; Yousaf, A.; Siuly, S.; Rehman, A.U. Exploiting pretrained CNN models for the development of an EEG-based robust BCI framework. Comput. Biol. Med. 2022, 143, 105242.
  40. Huang, B.; Xu, H.; Yuan, M.; Aziz, M.Z.; Yu, X. Exploiting Asymmetric EEG Signals with EFD in Deep Learning Domain for Robust BCI. Symmetry 2022, 14, 2677.
Figure 1. Proposed DCNN framework for brain tumor classification.
Figure 2. Sample brain tumor dataset from Figshare: (a) glioma; (b) meningioma; (c) pituitary.
Figure 3. Images obtained after application of the data augmentation technique.
Figure 4. Proposed transfer learning based framework for brain tumor classification using the pre-trained VGG-16 network.
Figure 5. Tumor detection results using the proposed framework: (a) original brain MR image with tumor; (b) white matter; (c) segmented image; (d) gray matter; (e) skull-stripped image; (f) extracted tumor.
Figure 6. Confusion matrix of the proposed framework.
Figure 7. (a) Receiver operating characteristic (ROC) curve and (b) precision-recall (PR) curve of the proposed approach.
Figure 8. Learning curve of the proposed framework illustrating percentage accuracy versus number of iterations.
Table 1. Network hyperparameter settings in our proposed methodology.

Parameters Used               Values
Batch size                    20
Dropout rate                  0.2
Learning rate                 0.001
Epochs                        100
Optimizer                     Adam
Activation function           Softmax
Vertical flip                 0.5
Horizontal flip               0.5
Random brightness contrast    0.3
Shift scale rotate            0.5
Table 2. Experimental results of the proposed transfer learning based DCNN model on the Figshare dataset.

Accuracy (%)    Sensitivity (%)    Specificity (%)    Precision (%)
98.93           98.68              99.13              99.11
Table 3. Accuracy percentage of each type of tumor obtained by the proposed approach.

Type of Brain Tumor    Number of Brain MRI Slices Used for Testing    Accuracy (%)    Mean Accuracy (%)
Meningioma             106                                            97.88           98.51
Glioma                 214                                            99.29
Pituitary              139                                            98.38
Table 4. MSE loss at each convolutional layer of the proposed transfer learning based DCNN model on the Figshare dataset.

Layers    CONV-1    CONV-2    CONV-3    CONV-4    CONV-5
Loss      0.168     0.271     0.348     0.395     0.462
Table 5. Performance comparison of the proposed transfer learning based DCNN model with existing learning based approaches on the Figshare dataset.

Methods                  Year    Model Used          Accuracy (%)
Ismael et al. [3]        2018    NN                  91.90
Abiwinanda et al. [7]    2019    CNN                 84.19
Swati et al. [16]        2019    CNN                 94.82
Afshar et al. [29]       2018    CapsNet             86.56
Pashaei et al. [39]      2018    CNN                 93.68
Jun et al. [40]          2022    Dual-attention      98.61
Masood et al. [37]       2021    Custom Mask-RCNN    98.34
Proposed Method          2023    VGG-16 CNN          98.93
Table 6. Comparative analysis of the results obtained by the proposed transfer learning based DCNN model on the Figshare dataset with and without data pre-processing.

Evaluation Metrics             Accuracy (%)    Sensitivity (%)    Specificity (%)    Precision (%)
With data pre-processing       98.93           98.68              99.13              99.11
Without data pre-processing    97.82           97.56              97.88              98.15