Article

Brain Tumor Class Detection in Flair/T2 Modality MRI Slices Using Elephant-Herd Algorithm Optimized Features

by Venkatesan Rajinikanth 1, P. M. Durai Raj Vincent 2, C. N. Gnanaprakasam 3, Kathiravan Srinivasan 4 and Chuan-Yu Chang 5,6,*

1 Department of Computer Science and Engineering, Division of Research and Innovation, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai 602105, India
2 School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India
3 Department of Electronics and Instrumentation Engineering, St. Joseph’s College of Engineering, OMR, Chennai 600119, India
4 School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, India
5 Department of Computer Science and Information Engineering, National Yunlin University of Science and Technology, Yunlin 64002, Taiwan
6 Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu 310401, Taiwan
* Author to whom correspondence should be addressed.
Diagnostics 2023, 13(11), 1832; https://doi.org/10.3390/diagnostics13111832
Submission received: 30 March 2023 / Revised: 8 May 2023 / Accepted: 19 May 2023 / Published: 23 May 2023
(This article belongs to the Special Issue AI as a Tool to Improve Hybrid Imaging in Cancer—2nd Edition)

Abstract

Advances in science and technology have driven several improvements in computing facilities, including the implementation of automation in multi-specialty hospitals. This research aims to develop an efficient deep-learning-based brain-tumor (BT) detection scheme to detect tumors in FLAIR- and T2-modality magnetic-resonance-imaging (MRI) slices. Axial-plane brain MRI slices are used to test and verify the scheme, and its reliability is also verified with clinically collected MRI slices. The proposed scheme involves the following stages: (i) pre-processing the raw MRI image, (ii) deep-feature extraction using pretrained schemes, (iii) watershed-algorithm-based BT segmentation and mining of the shape features, (iv) feature optimization using the elephant-herding algorithm (EHA), and (v) binary classification and verification using three-fold cross-validation. The BT-classification task is accomplished in this study using (a) individual features, (b) dual deep features, and (c) integrated features. Each experiment was conducted separately on the chosen BRATS and TCIA benchmark MRI slices. This research indicates that the integrated feature-based scheme helps to achieve a classification accuracy of 99.6667% when a support-vector-machine (SVM) classifier is considered. Further, the performance of this scheme was verified using noise-attacked MRI slices, and better classification results were achieved.

1. Introduction

Disease rates are rising progressively worldwide due to various reasons, placing a heavy diagnostic burden on hospitals. Many recent procedures and aiding schemes have been developed to reduce this burden and to support the doctors who perform initial screening, disease-severity confirmation, treatment planning and execution, and recovery monitoring [1]. Furthermore, developments in computing facilities and improvements in artificial-intelligence (AI) schemes also help reduce the diagnostic burden through several machine-learning (ML) and deep-learning (DL) procedures that support various levels of the disease-diagnosis process.
The 2020 report by the Global Cancer Observatory (GCO) confirmed that the incidence of and death rate due to cancer are gradually increasing globally. Early detection is therefore essential to treat the disease effectively with the chosen medication procedures. A brain tumor is a medical emergency, and an early and accurate diagnosis is indispensable for executing promising treatment. In the GCO report, cancer of the brain was listed in the 19th position by incidence (308,102 cases) and in the 12th position by confirmed deaths (251,329 cases). Because of its severity, researchers have suggested several measures to detect brain cancer [2]. Therefore, accurate detection of the cancer and its category is vital when treating patients.
Brain tumors (BTs) cause severe physical and psychological issues in humans, and an accurate diagnosis is necessary to determine how to treat them. As a consequence, the World Health Organization (WHO) issued guidelines to assist in the identification of BTs and the class of each tumor in order to plan and implement the necessary treatment [3]. It is common practice in hospitals to perform medical-image verification procedures when diagnosing BTs, and radiological approaches, such as computed tomography (CT) and magnetic-resonance imaging (MRI), are the standard procedures followed when a diagnosis of BT is suspected. Various techniques can be used to carry out MRI; however, the FLAIR and T2 modalities tend to be the most commonly used ones due to their superior visibility compared with the T1, T1-contrast (T1C), and diffusion-weighted (DW) techniques, and hence they are widely adopted [4].
The clinical-level examination of a BT involves segmenting the affected section to confirm the severity and position of the tumor in the brain. Further, computer algorithms are widely employed in modern hospitals to examine radiological images and detect BTs with improved accuracy. Computerized procedures, combined with recent AI schemes such as ML and DL techniques, are commonly employed in (i) classification and (ii) segmentation tasks, which helps to noticeably reduce the investigative time. The work of Rajinikanth et al. [5] on BT detection in MRI slices confirmed that computer algorithms act as an aiding tool for the radiologist and help provide the initial-level assessment report. The neurologist confirms this report and finally decides on the treatment to be executed [6].
Recent works on BT detection confirm that DL approaches help to attain improved accuracy compared with ML schemes [7,8]. Hence, researchers have developed several DL schemes to examine benchmark and clinical MRI slices containing tumors [9,10]. Further, earlier works also confirmed that the outcome of a pretrained DL structure can be upgraded by employing optimization-algorithm-based feature selection [11], an ensemble-of-features approach [12], fused deep features [13], and serially integrated DL and ML features [14]. The goal of any computer algorithm is to achieve enhanced BT recognition irrespective of the MRI modality and its quality. Earlier research on BT detection can be accessed in [15,16].
This research develops a BT-detection framework using serially integrated ML and DL features to achieve better accuracy during FLAIR- and T2-modality image analysis. This framework involves the subsequent stages: (i) data collection, 3D to 2D transformation, and pre-processing; (ii) deep-feature extraction using a pretrained scheme; (iii) feature mining of the shape of the tumor; (iv) feature selection using the elephant-herding algorithm (EHA); and (v) binary classification and verification using three-fold cross-validation. This work investigates the following DL methods: VGG16, VGG19, ResNet50, ResNet101, and DenseNet101. Further, watershed segmentation (WS) is also considered to segment the tumor region to obtain the necessary ML features, such as gray-level co-occurrence matrices (GLCMs) and Hu moments. The serial integration of DL and ML (DL + ML) features is considered to achieve better BT detection.
Using EHA-based optimization, this work reduces the DL features in order to avoid the issue of overfitting. As a first step toward verifying the significance of the proposed scheme, well-known benchmark BT datasets, such as BRATS2015 [17,18] and TCIA [19,20], were utilized. The results indicate that the proposed work is significant for analyzing FLAIR/T2 MRI slices. Its clinical significance is also confirmed by taking noise-attacked (Gaussian noise) MRI slices into consideration as part of the proposed technique. According to the experimental results, with the serially fused features, the developed classification scheme achieves a classification accuracy greater than 99% on the considered axial-plane MRI slices.
The contributions of this research work are discussed below:
  • Development of a unique procedure to examine FLAIR- and T2-modality MRI slices with/without the skull region;
  • Integrated DL and ML features to achieve better BT-detection performance;
  • EHA-based feature optimization to obtain better results without the overfitting issue;
  • Verifying the performance of the proposed scheme using a clinical MRI dataset of the T2-modality.
Section 2 presents the previous studies on BT classification, Section 3 illustrates the methodology, and Section 4 and Section 5 present the results and conclusion of this study, respectively.

2. Literature Review

The development of a computerized algorithm for automatically detecting BT in a chosen MRI slice has gained attention among researchers [21,22,23]. Hence, several automatic procedures have been proposed and implemented to assess various modalities of MRI slices. In recent procedures, binary classifiers were used to classify MRI slices using customized/pre-trained deep-learning methods, resulting in more accurate results. In addition, several procedures were implemented in earlier studies to improve BT-detection accuracy, and Table 1 summarizes a few chosen approaches.
The above table summarizes a few earlier works that used the BRATS and TCIA databases, in which the maximum reported accuracy was 99.29%. Hence, this work implements a methodology to improve BT detection and achieve an accuracy of >99.29%.

3. Methodology

When brain MRI is taken into account, a DL-based BT-detection procedure can provide a higher degree of accuracy. Thus, the purpose of this research was to develop an appropriate BT-detection procedure for analyzing MRI slices of the FLAIR and T2 modalities. Moreover, the proposed scheme was tested on MRI slices corrupted with noise in order to confirm its ability to detect BTs in the presence of such attacks; even if the MRI is associated with noise, the proposed deep-learning scheme should still detect the BT efficiently.

3.1. Disease-Detection Scheme

As depicted in Figure 1, this study collected the necessary MRI data from the benchmark datasets (BRATS2015 and TCIA). As these images were in 3D format, the ITK-Snap tool [41] was used to obtain 2D slices (axial plane) from them, and these slices were used to confirm the merit of the developed algorithm. This framework utilizes deep-learning features to classify MRI slices and improve detection performance. Following the extraction of the tumor section using WS, the DL features are serially combined with the ML features. In addition, EHA-based feature selection was used to minimize overfitting. Finally, to verify the merit of the developed system on MRI slices with or without a skull section, binary classification with a three-fold cross-validation methodology was implemented, followed by evaluation of the achieved performance metrics.
To achieve superior detection accuracy for BTs when both FLAIR- and T2-modality MRI are considered, the technique executes a novel procedure that serially integrates the DL and ML features. This scheme was tested and verified on the benchmark images, and the outcomes confirm that it achieves better accuracy on these images. As a result, it can also be considered for examining clinical MRI images that include the skull section.

3.2. MRI Dataset

The availability of clinical-grade brain-MRI slices for research purposes is minimal, as discussed in [1], and authentic clinical MRI images are protected due to ethical issues; most of these images are available for clinical study rather than academic research. Hence, most research considers the benchmark-image datasets (authentic clinical and synthetic images) available for academic research. This work considered the Brain Tumor Segmentation challenge database (BRATS2015) and The Cancer Imaging Archive (TCIA) datasets for the examination. From BRATS, skull-stripped FLAIR-MRI slices of low-/high-grade glioma (LGG/HGG) were considered, and earlier works on these images can be found in [17,18]. The TCIA database consists of LGG- [19] and glioblastoma (GBM)-class MRI [20], and earlier works on these databases can be accessed in [25,26,27,28].
BRATS and TCIA provide 3D volumes, from which 2D slices were obtained using ITK-Snap; these slices were then used to authenticate the performance of the planned system. This work considered 1500 MRI slices from each class for the examination. Every image was resized to the required dimensions and enhanced using the CLAHE technique to improve the prominence of the tumor region. The sample test-image set is shown in Figure 2, and details of the considered images are presented in Table 2. Figure 2a,b show the FLAIR- and T2-modality slices. In Table 2, Class 1 represents LGG and Class 2 represents HGG/GBM. Earlier DL-based classification of these datasets can be found in [1,4,5].
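As an illustration of the enhancement step, the sketch below implements a simplified, NumPy-only variant of CLAHE (per-tile histogram equalization with clip-and-redistribute, omitting the inter-tile interpolation of full CLAHE); the tile count and clip limit are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def clahe_simplified(img, tiles=4, clip_limit=0.01):
    """Simplified CLAHE: per-tile histogram equalization with clipping.
    (No inter-tile interpolation, unlike full CLAHE.)"""
    img = img.astype(np.uint8)
    h, w = img.shape
    out = np.empty_like(img)
    th, tw = h // tiles, w // tiles
    clip = max(1, int(clip_limit * th * tw))  # maximum count allowed per bin
    for i in range(tiles):
        for j in range(tiles):
            tile = img[i*th:(i+1)*th, j*tw:(j+1)*tw]
            hist = np.bincount(tile.ravel(), minlength=256)
            excess = np.sum(np.maximum(hist - clip, 0))
            hist = np.minimum(hist, clip) + excess // 256  # clip + redistribute
            cdf = np.cumsum(hist).astype(np.float64)
            cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255
            out[i*th:(i+1)*th, j*tw:(j+1)*tw] = cdf[tile].astype(np.uint8)
    return out

# Low-contrast synthetic slice: intensities confined to a narrow band
rng = np.random.default_rng(0)
slice_img = rng.integers(100, 140, size=(128, 128)).astype(np.uint8)
enhanced = clahe_simplified(slice_img)
print(enhanced.shape, int(enhanced.min()), int(enhanced.max()))
```

The clipping step bounds how strongly any single gray level is stretched, which is what keeps CLAHE from amplifying noise as aggressively as plain histogram equalization.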
Along with the benchmark-image datasets, this research also considered clinically collected T2-modality MRI slices from real patients, as shown in Figure 3. Figure 3a,b present the LGG and GBM images from the clinical dataset [42,43].

3.3. Feature Extraction

For this study, DL and ML features were considered for the examination, as the performance of ML and DL schemes was primarily determined by the information extracted from the chosen test images.

3.3.1. Deep Features

The literature on BT detection in MRI with DL schemes confirms that DL procedures help achieve better detection than other available image-examination schemes. Hence, DL-based BT-detection procedures have been widely adopted by researchers [26,27,28,29,30,31,32]. The following initial values were assigned for the chosen DL schemes: learning rate = 1 × 10−5, Adam optimization, ReLU activation, total iterations = 1000, total epochs = 150, and classification with a SoftMax unit using three-fold cross-validation. The proposed work considered the mined features to classify the chosen MRI slices. Every DL scheme produced a one-dimensional feature vector of size 1 × 1 × 1000, which was then considered for the disease-detection task, as in Equations (1) and (2).
$\mathrm{DLfeature}_{\mathrm{VGG16}}^{(1 \times 1 \times 1000)} = \{\mathrm{VGG16}_{1,1}, \mathrm{VGG16}_{1,2}, \ldots, \mathrm{VGG16}_{1,1000}\}$ (1)

$\mathrm{DLfeature}_{\mathrm{DenseNet}}^{(1 \times 1 \times 1000)} = \{\mathrm{DenseNet}_{1,1}, \mathrm{DenseNet}_{1,2}, \ldots, \mathrm{DenseNet}_{1,1000}\}$ (2)

3.3.2. Tumor Features

Machine-learning features, such as GLCMs and Hu moments, were considered in addition to the DL features to improve BT detection; the ML and DL features were combined in this work to improve accuracy, as discussed in this subsection. More information about these features can be found in [5,11]. In this work, the watershed-segmentation (WS) procedure was used to extract the tumor from the MRI slices. The WS scheme uses an automatic algorithm to mine the tumor region through edge detection, morphological operations, tumor enhancement, and extraction. The WS scheme is primarily determined by the marker dimension, which in this case was assigned a value of 10 [44]. Figure 4 depicts the result of WS for a chosen image from TCIA: Figure 4a,b depict the original image and a noise-corrupted image, and Figure 4c,d show the segmented tumor, from which the necessary tumor features, such as GLCMs and Hu moments, were obtained [4,5]. The binary tumor section was used to obtain 25 GLCM features and three Hu features. The merit of the WS was confirmed on both clean and noise-corrupted images, on which the proposed segmentation approach worked well.
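The marker-based watershed step can be sketched as follows using SciPy's IFT watershed on a synthetic slice; the synthetic blob, the gradient-based elevation map, and the seed positions are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy import ndimage

# Synthetic axial "slice": a bright tumor-like blob on a dark background
img = np.zeros((64, 64), dtype=np.uint8)
img[20:40, 24:44] = 200

# Elevation map: high along the tumor boundary, flat elsewhere
gy, gx = np.gradient(img.astype(float))
elevation = np.hypot(gy, gx)
elevation = (255 * elevation / elevation.max()).astype(np.uint8)

# Markers: label 1 seeded inside the tumor, label 2 in the background
markers = np.zeros_like(img, dtype=np.int16)
markers[30, 34] = 1   # tumor seed
markers[2, 2] = 2     # background seed

labels = ndimage.watershed_ift(elevation, markers)
tumor_mask = labels == 1
print(tumor_mask.sum())  # pixel count of the recovered tumor region
```

Flooding from both seeds stops where the elevation (boundary gradient) is high, so label 1 fills the blob interior and label 2 fills the background, yielding the binary tumor mask from which shape features can then be mined.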
The proposed WS effectively extracted the tumor in the LGG-, HGG-, and GBM-class MRI slices. The extracted BT was in binary form, from which the GLCMs and Hu moments were effectively computed. These features were integrated with the DL features to improve the BT-detection accuracy. The ML features considered in this study are given in Equations (3) and (4):
$\mathrm{GLCM}^{(1 \times 1 \times 25)} = \{\mathrm{GLCM}_{1,1}, \mathrm{GLCM}_{1,2}, \ldots, \mathrm{GLCM}_{1,25}\}$ (3)

$\mathrm{Hu}^{(1 \times 1 \times 3)} = \{\mathrm{Hu}_{1,1}, \mathrm{Hu}_{1,2}, \mathrm{Hu}_{1,3}\}$ (4)
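A minimal NumPy sketch of both feature families is shown below; the specific GLCM offset, gray-level quantization, and the three chosen GLCM properties are illustrative assumptions (the paper does not list its 25 GLCM features individually).

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset, normalized."""
    g = (img.astype(np.float64) / 256 * levels).astype(int)
    m = np.zeros((levels, levels))
    h, w = g.shape
    a = g[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = g[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    return m / m.sum()

def glcm_props(m):
    """Three common GLCM properties (illustrative subset of the paper's 25)."""
    i, j = np.indices(m.shape)
    return {
        "contrast": float(np.sum(m * (i - j) ** 2)),
        "energy": float(np.sum(m ** 2)),
        "homogeneity": float(np.sum(m / (1.0 + np.abs(i - j)))),
    }

def hu_first_three(mask):
    """First three Hu moments of a binary mask."""
    y, x = np.nonzero(mask)
    m00 = len(x)
    xc, yc = x.mean(), y.mean()
    def mu(p, q):   # central moment
        return np.sum((x - xc) ** p * (y - yc) ** q)
    def eta(p, q):  # normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    h3 = (eta(3, 0) - 3 * eta(1, 2)) ** 2 + (3 * eta(2, 1) - eta(0, 3)) ** 2
    return np.array([h1, h2, h3])

mask = np.zeros((32, 32), dtype=np.uint8)
mask[8:24, 10:22] = 1  # rectangular "tumor" stand-in
print(glcm_props(glcm(mask * 255)))
print(hu_first_three(mask))
```

Because the Hu moments are computed from normalized central moments, they are invariant to translation and scale, which is why they are useful shape descriptors for a segmented tumor mask.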

3.3.3. Feature Optimization

When a large amount of data is considered in a disease-detection task, ML and DL schemes may overfit, which must be avoided to validate the importance of the technique. Because of this, feature-selection/-optimization schemes have been extensively employed in the literature, and the reduced feature vector is then used to verify the performance of the implemented scheme. The feature-reduction task can be implemented using the conventional approach of Student's t-test, but this involves more computation to achieve a reduced feature vector. Hence, several heuristic algorithms have recently been employed to optimize the features, and in this work, the elephant-herding-algorithm (EHA)-based procedure was implemented for the feature selection.
The EHA is a heuristic scheme developed by mimicking the food-foraging behavior of an elephant herd, which is guided by a group leader (matriarch). The essential information for the EHA can be found in [45]. The theory behind the EHA includes the following conditions: Each clan contains a fixed number of members (elephants) in the herd, and the adult male elephants leave the herd and live alone during each generation. An older matriarch commands each clan.
In order for the EHA to work, the following assumptions must be made:
  • The herd of elephants in each clan is stable;
  • Male elephants are separate from their groups in each generation;
  • Herds are led to food and water by older elephants (matriarchs).
The clan-updating and -separating process makes up the EHA-optimization search. An overview of the EHA is shown in Figure 5. Figure 5a,b present the initial distribution of elephants in the search space.
Equations (5)–(7) show the mathematical expression of the algorithm:
$X_{\mathrm{new},c_i,j} = X_{c_i,j} + \alpha \times (X_{\mathrm{best},c_i} - X_{c_i,j}) \times r$ (5)

where $c_i$ denotes clan $i$, $X_{c_i,j}$ is the earlier location of elephant $j$ in the clan, $X_{\mathrm{new},c_i,j}$ is its updated location, $X_{\mathrm{best},c_i}$ is the location of the matriarch (the best elephant) of clan $c_i$, $j = 1, 2, \ldots, n_{c_i}$ indexes the elephants in the clan, $\alpha \in [0,1]$ is a scale factor, and $r \in [0,1]$ is a random value.
The position of the matriarch of each clan is also updated during each operation:

$X_{\mathrm{new},c_i,j} = \beta \times X_{\mathrm{center},c_i,d}$ (6)

where $X_{\mathrm{center},c_i,d} = \frac{1}{n_{c_i}} \times \sum_{j=1}^{n_{c_i}} X_{c_i,j,d}$, $\beta \in [0,1]$ is a scale factor, and $d$ denotes the search dimension.
The performance of the separation process improves with the number of generations. During this process, the male elephant (the worst-fitness member) is discarded and replaced, and the positions of the other elephants are updated. This process is depicted in Figure 5b, and its mathematical expression is given in Equation (7):

$X_{\mathrm{worst},c_i} = X_{\min} + (X_{\max} - X_{\min} + 1) \times \mathrm{rand}$ (7)

where $X_{\min}$ and $X_{\max}$ are the lower and upper limits of the search space, and $\mathrm{rand} \in [0,1]$ is a random number.
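A compact sketch of the EHA loop built from Equations (5)–(7) is given below; the sphere function stands in for the paper's Cartesian-distance objective, and the clan count, clan size, and scale factors α and β are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def eha_minimize(f, dim=2, n_clans=3, clan_size=10, alpha=0.5, beta=0.1,
                 x_min=-5.0, x_max=5.0, iters=200):
    """Minimal elephant-herding optimization following Equations (5)-(7)."""
    clans = [rng.uniform(x_min, x_max, (clan_size, dim)) for _ in range(n_clans)]
    for _ in range(iters):
        for c in clans:
            fit = np.array([f(x) for x in c])
            best = c[fit.argmin()].copy()
            center = c.mean(axis=0)
            # Eq. (5): clan members move toward the matriarch (best elephant)
            r = rng.uniform(0, 1, c.shape)
            c += alpha * (best - c) * r
            # Eq. (6): the matriarch is updated relative to the clan center
            c[fit.argmin()] = beta * center
            # Eq. (7): the worst elephant (separating male) is re-initialized
            fit = np.array([f(x) for x in c])
            c[fit.argmax()] = x_min + (x_max - x_min + 1) * rng.uniform(0, 1, dim)
            np.clip(c, x_min, x_max, out=c)
    all_x = np.vstack(clans)
    return all_x[np.array([f(x) for x in all_x]).argmin()]

sphere = lambda x: float(np.sum(x ** 2))
best = eha_minimize(sphere)
print(best, sphere(best))
```

The clan update exploits the current matriarch while the separating operator re-injects random exploration, which is the balance that lets the EHA search a feature-scoring landscape without stalling in one region.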
The optimization process with the EHA can be found in [46]; in this work, it was used to optimize the DL features, and similar work related to this task can be found in [47]. The initial parameters of the EHA were assigned as follows: number of members = 30, number of generations = 5, dimensions of search = 2, objective value = Cartesian distance, number of iterations ($Iter_{\max}$) = 1500, and algorithm termination = $Iter_{\max}$. The feature optimization reduced the VGG16 and DenseNet101 features to lower-dimensional vectors, which were then serially combined. The reduced feature dimensions obtained for the FLAIR-modality MRI are presented in Equations (8) and (9), and those for the T2-modality MRI are shown in Equations (10) and (11):
$\mathrm{DLfeature}_{\mathrm{VGG16}}^{(1 \times 1 \times 373)} = \{\mathrm{VGG16}_{1,1}, \mathrm{VGG16}_{1,2}, \ldots, \mathrm{VGG16}_{1,373}\}$ (8)

$\mathrm{DLfeature}_{\mathrm{DenseNet}}^{(1 \times 1 \times 416)} = \{\mathrm{DenseNet}_{1,1}, \mathrm{DenseNet}_{1,2}, \ldots, \mathrm{DenseNet}_{1,416}\}$ (9)

$\mathrm{DLfeature}_{\mathrm{VGG16}}^{(1 \times 1 \times 401)} = \{\mathrm{VGG16}_{1,1}, \mathrm{VGG16}_{1,2}, \ldots, \mathrm{VGG16}_{1,401}\}$ (10)

$\mathrm{DLfeature}_{\mathrm{DenseNet}}^{(1 \times 1 \times 428)} = \{\mathrm{DenseNet}_{1,1}, \mathrm{DenseNet}_{1,2}, \ldots, \mathrm{DenseNet}_{1,428}\}$ (11)

3.4. Implementation

The experimental investigation of this study was implemented on a workstation with the following specifications: Intel i7 CPU, 16 GB RAM, and 4 GB VRAM. The procedures of tumor segmentation with WS and EHA-based feature optimization were implemented using MATLAB. The other prime tasks, such as DL-feature mining and classification, were executed using Python. The proposed work was separately tested and verified on the MRI slices of the FLAIR and T2 modalities, and the achieved results were recorded and analyzed. The dual deep features of this study were obtained by serially integrating the VGG16 and DenseNet101 features, and the DL + ML feature was obtained by serially combining the VGG16 features, GLCMs, and Hu moments. The feature optimization with the proposed EHA is shown in Figure 6. In this process, the optimal features were selected by comparing the tumor features (F1, F2, …, Fn) based on the Cartesian distance (CD); the features with the maximal CD were selected, and the other features were discarded.
In this work, the feature dimensions considered for the FLAIR modality were as follows: dual deep = $1 \times 1 \times 789$ and DL + ML = $1 \times 1 \times 401$. Similarly, the T2-modality examination was performed using the following features: dual deep = $1 \times 1 \times 829$ and DL + ML = $1 \times 1 \times 429$. These features were then considered to classify the MRI slices using binary classification with three-fold cross-validation.
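These dimensions follow directly from serial concatenation of the EHA-reduced vectors in Equations (8)–(11) with the 25 GLCM and 3 Hu features, which can be checked as follows:

```python
import numpy as np

# EHA-reduced feature sizes reported for the FLAIR modality (Eqs. (8)-(9))
vgg16_flair = np.zeros((1, 1, 373))
densenet_flair = np.zeros((1, 1, 416))
glcm, hu = np.zeros((1, 1, 25)), np.zeros((1, 1, 3))

dual_deep = np.concatenate([vgg16_flair, densenet_flair], axis=-1)
dl_ml = np.concatenate([vgg16_flair, glcm, hu], axis=-1)
print(dual_deep.shape, dl_ml.shape)  # (1, 1, 789) (1, 1, 401)

# T2 modality (Eqs. (10)-(11)): 401 + 428 = 829 and 401 + 25 + 3 = 429
vgg16_t2, densenet_t2 = np.zeros((1, 1, 401)), np.zeros((1, 1, 428))
print(np.concatenate([vgg16_t2, densenet_t2], axis=-1).shape)  # (1, 1, 829)
print(np.concatenate([vgg16_t2, glcm, hu], axis=-1).shape)     # (1, 1, 429)
```

Serial integration here is plain concatenation along the feature axis, so the fused vector simply inherits the sum of the component dimensions.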

3.5. Performance Evaluation

A high-quality approach must be demonstrated before it can be recommended for clinical MRI assessment. The performance of the proposed system was verified by computing the true-positive (TP), true-negative (TN), false-positive (FP), and false-negative (FN) values. These values were then used to calculate the accuracy, precision, sensitivity, and specificity. Binary classifiers, such as SoftMax, decision tree (DT), random forest (RF), K-nearest neighbor (KNN), and support-vector machine (SVM), were considered, and the results were recorded [48,49].
The mathematical expression of these measures can be found in Equations (12)–(16):
$\mathrm{Accuracy\ (AC)} = \dfrac{TP + TN}{TP + TN + FP + FN} \times 100$ (12)

$\mathrm{Precision\ (PR)} = \dfrac{TP}{TP + FP} \times 100$ (13)

$\mathrm{Sensitivity\ (SE)} = \dfrac{TP}{TP + FN} \times 100$ (14)

$\mathrm{Specificity\ (SP)} = \dfrac{TN}{TN + FP} \times 100$ (15)

$\mathrm{F1\text{-}Score\ (F1S)} = \dfrac{2TP}{2TP + FP + FN} \times 100$ (16)
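These measures can be computed directly from the confusion-matrix counts; the helper below implements Equations (12)–(16), with illustrative counts that are not taken from the paper's confusion matrix.

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, sensitivity, specificity, and F1 (Eqs. (12)-(16)), in %."""
    return {
        "accuracy":    100.0 * (tp + tn) / (tp + tn + fp + fn),
        "precision":   100.0 * tp / (tp + fp),
        "sensitivity": 100.0 * tp / (tp + fn),
        "specificity": 100.0 * tn / (tn + fp),
        "f1_score":    100.0 * 2 * tp / (2 * tp + fp + fn),
    }

# Illustrative counts only, chosen to mimic a near-perfect binary classifier
m = classification_metrics(tp=495, tn=498, fp=2, fn=5)
print(m)  # accuracy = 99.3, sensitivity = 99.0, specificity = 99.6, ...
```

Reporting sensitivity and specificity alongside accuracy matters here because a classifier that over-predicts one tumor class can still score high accuracy on a balanced dataset while missing the clinically important errors.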

4. Results and Discussion

The experimental results of this study are presented in this section. The proposed scheme was initially applied to the BRATS2015 database (FLAIR MRI without skull), and the result of the LGG/HGG classification was recorded. First, the classification task was performed using the individual DL features, and Table 3 illustrates the results of this experiment. Based on the classification-accuracy values of the DL schemes, this table indicates that VGG16 and DenseNet101 performed better on the chosen MRI slices than VGG19, ResNet50, and ResNet101. The same experiment was conducted with the TCIA dataset (T2 MRI with skull), and the results obtained during the LGG/GBM classification can also be seen in Table 3; these likewise confirm that VGG16 and DenseNet101 had higher classification accuracy than the other schemes in this study. Figure 7 presents the overall comparison of the performance metrics, with Figure 7a depicting LGG/HGGs and Figure 7b depicting LGG/GBMs.
Following identification of the best DL schemes, the proposed research was repeated using the dual deep features and the serially integrated DL + ML scheme. Along with SoftMax, other classifiers were also considered during this process, and each approach was evaluated based on the performance measures achieved. Three-fold cross-validation was performed during this task, and the best outcome was considered the final result, as shown in Table 4. The results indicate that the DL + ML features were more effective for classification than the other features considered in this study. Using the KNN classifier combined with the DL + ML features, this work achieved the highest classification accuracy of 99.33% for the FLAIR-modality MRI slices. Table 5 illustrates the results of implementing a similar procedure on the T2-modality MRI slices from TCIA, for which the proposed DL + ML technique achieved an accuracy of 99.67% with the SVM. These results confirm that the proposed scheme is more effective at detecting BTs in FLAIR-/T2-modality MRI.
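To illustrate the three-fold cross-validation protocol used throughout these experiments, the sketch below evaluates a simple nearest-centroid classifier (a stand-in for the SVM/KNN classifiers used in the paper) on synthetic two-class feature vectors; the data and classifier are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic two-class feature vectors standing in for the fused DL + ML features
X = np.vstack([rng.normal(0.0, 1.0, (150, 20)), rng.normal(3.0, 1.0, (150, 20))])
y = np.array([0] * 150 + [1] * 150)

def three_fold_accuracy(X, y, folds=3):
    """Hold out every third sample in turn; classify by nearest class centroid."""
    idx = rng.permutation(len(y))
    accs = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        centroids = [X[train][y[train] == c].mean(axis=0) for c in (0, 1)]
        d = np.stack([np.linalg.norm(X[test] - c, axis=1) for c in centroids])
        accs.append(float(np.mean(d.argmin(axis=0) == y[test])))
    return accs

accs = three_fold_accuracy(X, y)
print([round(a, 3) for a in accs], "mean:", round(float(np.mean(accs)), 3))
```

Averaging over the three folds, rather than reporting a single split, is what makes the reported accuracies less sensitive to a lucky or unlucky train/test partition.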
This scheme was initially tested on the BRATS database, and the same scheme was then considered to analyze TCIA; this work provided a better detection result with TCIA than with BRATS. The experimental result achieved in this research for the DL + ML-based detection of BT on the TCIA dataset is presented in Figure 8, where Figure 8a–e show the results of convolutional layers 1 to 5. Figure 9 presents the final result after training the system on the database and confirms that the training and validation results in Figure 9a,b are good. The overall performance of the proposed scheme was verified using the confusion matrix (CM) and receiver-operating-characteristic (ROC) curve, as presented in Figure 10a,b, which confirm that the proposed scheme worked well for the considered task. The CM achieved was superior, with better TP, FN, TN, and FP values. Moreover, the ROC curve achieved in this work showed an area under the curve (AUC) of 0.972, confirming the proposed technique's ability.
The commonly used spider plot was constructed using the values from Table 4 and Table 5 to present the attained metrics graphically. Figure 11a,b present the results of the dual deep and DL + ML features for the FLAIR-modality MRI, and Figure 11c,d demonstrate the outcomes for the T2-modality MRI. These results confirm that the proposed technique helped achieve a better outcome for both modalities of the chosen MRI databases.
The investigational outcome of this study confirms that the scheme provided a better result and worked well on multi-modality MRI. The results achieved using (i) individual, (ii) dual deep, and (iii) fused DL + ML features provided a detection accuracy of >91% for LGG/HGGs and LGG/GBMs. Compared to the accuracies reported in earlier works (Table 1), the proposed scheme provided a better result.
To confirm the merit of the work, the proposed scheme was also tested and validated on the LGG and GBM clinical databases considered in [48,49]. The experimental outcome confirms that this scheme achieved a detection accuracy of 98.92% with the dual deep features and 98.95% with the DL + ML features, which is better than that of the earlier work by Rajinikanth et al. [5]. This result confirms that the executed procedure can work well on clinical MRI slices collected from patients.
The limitation of the proposed scheme is that it implements WS segmentation to extract the tumor section. In the future, a DL-segmentation technique can be considered to mine the tumor region to obtain the necessary ML features. Further, the merit of this architecture can be tested and verified using pretrained lightweight DL schemes found in the literature.

5. Conclusions

This research aimed to develop and implement a procedure for analyzing tumors in FLAIR-/T2-MRI slices; axial-plane MRI slices were used for the assessment. In addition, this study implemented DL- and DL + ML-based procedures to improve the detection accuracy on MRI slices with or without skull sections. In the proposed experimental work, the VGG16 scheme was implemented to detect LGG/HGGs and LGG/GBMs from the considered databases. The experimental results confirm that the proposed scheme worked well on the selected benchmark images, and its effectiveness can be verified with more clinical images in the future. Furthermore, when the dual deep and DL + ML features were considered, this scheme achieved a higher BT-detection accuracy (>99%). The results show that this approach worked well on MRI slices without the skull; in the future, a skull-stripping procedure can be included in the preprocessing section to remove the skull. The experimental outcome of this research confirms that the proposed technique provides a better result and can be applied to the accurate analysis of patient data in the future.

Author Contributions

K.S. and C.-Y.C. conceptualized and supervised the research. V.R. and C.N.G. contributed to the development of the model, data processing, training procedures, and implementation of the model. K.S. and C.-Y.C. carried out the project administration and validated the results. V.R., P.M.D.R.V., C.N.G. and K.S. wrote the manuscript. V.R., P.M.D.R.V., C.N.G., K.S. and C.-Y.C. reviewed and edited the manuscript. C.-Y.C. carried out the funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the Intelligent Recognition Industry Service Research Center from the Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan and the Ministry of Science and Technology in Taiwan (grant No. MOST 109-2221-E-224-048-MY2).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The databases considered in this research work can be accessed from: (i) https://www.smir.ch/BRATS/Start2015 (accessed on 11 December 2022); (ii) https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=5309188 (accessed on 11 December 2022); and (iii) https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=1966258 (accessed on 11 December 2022).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rajinikanth, V.; Raj, A.N.J.; Thanaraj, K.P.; Naik, G.R. A customized VGG19 network with concatenation of deep and handcrafted features for brain tumor detection. Appl. Sci. 2020, 10, 3429. [Google Scholar] [CrossRef]
  2. Sung, H.; Ferlay, J.; Siegel, R.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2021, 71, 209–249. [Google Scholar] [CrossRef] [PubMed]
  3. Louis, D.N.; Perry, A.; Wesseling, P.; Brat, D.J.; Cree, I.A.; Figarella-Branger, D.; Hawkins, C.; Ng, H.K.; Pfister, S.M.; Reifenberger, G.; et al. The 2021 WHO classification of tumors of the central nervous system: A summary. Neuro-Oncology 2021, 23, 1231–1251. [Google Scholar] [CrossRef] [PubMed]
  4. Rajinikanth, V.; Kadry, S. Development of a framework for preserving the disease-evidence-information to support efficient disease diagnosis. Int. J. Data Warehous. Min. (IJDWM) 2021, 17, 63–84. [Google Scholar] [CrossRef]
  5. Rajinikanth, V.; Kadry, S.; Nam, Y. Convolutional-neural-network assisted segmentation and svm classification of brain tumor in clinical MRI slices. Inf. Technol. Control 2021, 50, 342–356. [Google Scholar] [CrossRef]
  6. Hossain, A.; Islam, M.; Beng, G.; Kashem, S.; Soliman, M.S.; Misran, N.; Chowdhury, M. Microwave brain imaging system to detect brain tumor using metamaterial loaded stacked antenna array. Sci. Rep. 2022, 12, 16478. [Google Scholar] [CrossRef]
  7. Deepak, S.; Ameer, P. Automated categorization of brain tumor from MRI using CNN features and SVM. J. Ambient. Intell. Humaniz. Comput. 2021, 12, 8357–8369. [Google Scholar] [CrossRef]
  8. Zhu, Z.; Khan, M.; Wang, S.H.; Zhang, Y. RBEBT: A ResNet-Based BA-ELM for Brain Tumor Classification. Cmc-Comput. Mater. Contin. 2023, 74, 101–111. [Google Scholar] [CrossRef]
  9. Belaid, O.N.; Loudini, M. Classification of brain tumor by combination of pre-trained VGG16 CNN. J. Inf. Technol. Manag. 2020, 12, 13–25. [Google Scholar]
  10. Sarkar, S.; Kumar, A.; Chakraborty, S.; Aich, S.; Sim, J.-S.; Kim, H.-C. A CNN based approach for the detection of brain tumor using MRI scans. Test Eng. Manag. 2020, 83, 16580–16586. [Google Scholar]
  11. Pugalenthi, R.; Rajakumar, M.; Ramya, J.; Rajinikanth, V. Evaluation and classification of the brain tumor MRI using machine learning technique. J. Control Eng. Appl. Inform. 2019, 21, 12–21. [Google Scholar]
  12. Brunese, L.; Mercaldo, F.; Reginelli, A.; Santone, A. An ensemble learning approach for brain cancer detection exploiting radiomic features. Comput. Methods Programs Biomed. 2020, 185, 105134. [Google Scholar] [CrossRef] [PubMed]
  13. Sethy, P.; Behera, S. A data constrained approach for brain tumour detection using fused deep features and SVM. Multimed. Tools Appl. 2021, 80, 28745–28760. [Google Scholar] [CrossRef]
  14. Raza, A.; Ayub, H.; Khan, J.A.; Ahmad, I.; SSalama, A.; Daradkeh, Y.I.; Javeed, D.; Ur Rehman, A.; Hamam, H. A hybrid deep learning-based approach for brain tumor classification. Electronics 2022, 11, 1146. [Google Scholar] [CrossRef]
  15. Kadry, S.; Rajinikanth, V.; Raja, N.; Hemanth, D.J.; Hannon, N.; Raj, A. Evaluation of brain tumor using brain MRI with modified-moth-flame algorithm and Kapur’s thresholding: A study. Evol. Intell. 2021, 14, 1053–1063. [Google Scholar] [CrossRef]
  16. Xiao, Y.; Yin, H.; Wang, S.-H.; Zhang, Y.-D. TReC: Transferred ResNet and CBAM for Detecting Brain Diseases. Front. Neuroinform. 2021, 15, 71. [Google Scholar] [CrossRef]
  17. Menze, B.H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K.; Kirby, J.; Burren, Y.; Porz, N.; Slotboom, J.; Wiest, R.; et al. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Trans. Med. Imaging 2015, 34, 1993–2024. [Google Scholar] [CrossRef]
  18. Bakas, S.; Akbari, H.; Sotiras, A.; Bilello, M.; Rozycki, M.; Kirby, J.S.; Freymann, J.B.; Farahani, K.; Davatzikos, C. Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features. Sci. Data 2017, 4, 170117. [Google Scholar] [CrossRef]
  19. Bakas, S.; Akbari, H.; Sotiras, A.; Bilello, M.; Rozycki, M.; Kirby, J.; Freymann, J.; Farahani, K.; Davatzikos, C. Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-LGG collection. Cancer Imaging Arch. 2017, 286. [Google Scholar] [CrossRef]
  20. Bakas, S.; Akbari, H.; Sotiras, A.; Bilello, M.; Rozycki, M.; Kirby, J.; Freymann, J.; Farahani, K.; Davatzikos, C. Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-GBM collection. Cancer Imaging Arch. 2017. [Google Scholar] [CrossRef]
  21. Srinivasan, K.; Ankur, A.; Sharma, A. Super-resolution of Magnetic Resonance Images using deep Convolutional Neural Networks. In Proceedings of the 2017 IEEE International Conference on Consumer Electronics—Taiwan (ICCE-TW), Taipei, Taiwan, 12–14 June 2017; pp. 41–42. [Google Scholar] [CrossRef]
  22. Ramaneswaran, S.; Srinivasan, K.; Vincent, P.M.D.R.; Chang, C.-Y. Hybrid Inception v3 XGBoost Model for Acute Lymphoblastic Leukemia Classification. Comput. Math. Methods Med. 2021, 2021, 2577375. [Google Scholar] [CrossRef]
  23. Kathiravan, S.; Kanakaraj, J. A Review on Potential Issues and Challenges in MR Imaging. Sci. World J. 2013, 2013, 783715. [Google Scholar] [CrossRef] [PubMed]
  24. Kalaiselvi, T.; Padmapriya, S.T.; Sriramakrishnan, P.; Somasundaram, K. Deriving tumor detection models using convolutional neural networks from MRI of human brain scans. Int. J. Inf. Technol. 2020, 12, 403–408. [Google Scholar] [CrossRef]
  25. Raja, P. Brain tumor classification using a hybrid deep autoencoder with Bayesian fuzzy clustering-based segmentation approach. Biocybern. Biomed. Eng. 2020, 40, 440–453. [Google Scholar] [CrossRef]
  26. Amin, J.; Sharif, M.; Gul, N.; Raza, M.; Anjum, M.; Nisar, M.; Bukhari, S. Brain tumor detection by using stacked autoencoders in deep learning. J. Med. Syst. 2020, 44, 32. [Google Scholar] [CrossRef] [PubMed]
  27. Özyurt, E.; Avcı, D. An expert system for brain tumor detection: Fuzzy C-means with super resolution and convolutional neural network with extreme learning machine. Med. Hypotheses 2020, 134, 109433. [Google Scholar] [CrossRef] [PubMed]
  28. Anilkumar, B.; Kumar, P. Tumor classification using block wise fine tuning and transfer learning of deep neural network and KNN classifier on MR brain images. Int. J. Emerg. Trends Eng. Res. 2020, 8, 574–583. [Google Scholar] [CrossRef]
  29. Sharif, M.I.; Li, J.P.; Khan, M.A.; Saleem, M.A. Active deep neural network features selection for segmentation and recognition of brain tumors using MRI images. Pattern Recognit. Lett. 2020, 129, 181–189. [Google Scholar] [CrossRef]
  30. Han, C.; Rundo, L.; Araki, R.; Furukawa, Y.; Mauri, G.; Nakayama, H.; Hayashi, H. Infinite Brain Mr Images: PGGAN-Based Data Augmentation for Tumor Detection; Springer: Singapore, 2020; pp. 291–303. [Google Scholar]
  31. Amin, J.; Sharif, M.; Yasmin, M.; Saba, T.; Anjum, M.A.; Fernandes, S.L. A new approach for brain tumor segmentation and classification based on score level fusion using transfer learning. J. Med. Syst. 2019, 43, 326. [Google Scholar] [CrossRef]
  32. Siar, M.; Teshnehlab, M. Brain tumor detection using deep neural network and machine learning algorithm. In Proceedings of the 2019 9th International Conference on Computer and Knowledge Engineering (ICCKE), Mashhad, Iran, 24–25 October 2019; pp. 363–368. [Google Scholar] [CrossRef]
  33. Krishnammal, P.M.; Raja, S.S. Convolutional neural network based image classification and detection of abnormalities in MRI brain images. In Proceedings of the 2019 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 4–6 April 2019; pp. 548–553. [Google Scholar] [CrossRef]
  34. Ezhilarasi, R.; Varalakshmi, P. Tumor detection in the brain using faster R-CNN. In Proceedings of the 2018 2nd International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud)(I-SMAC) I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), Palladam, India, 30–31 August 2018; pp. 388–392. [Google Scholar] [CrossRef]
  35. Antony, A.; Fathima, K.A.; Raju, D.; Binish, M.C. Brain tumor detection and classification in MRI images. Int. J. Innov. Res. Sci. Eng. Technol. 2017, 6, 84–89. [Google Scholar]
  36. Pandian, A.A.; Balasubramanian, R. Fusion of contourlet transform and zernike moments using content based image retrieval for MRI brain tumor images. Indian J. Sci. Technol. 2016, 9, 1–8. [Google Scholar] [CrossRef]
  37. Gudigar, A.; Raghavendra, U.; Rao, T.N.; Samanth, J.; Rajinikanth, V.; Satapathy, S.C.; Ciaccio, E.J.; Wai Yee, C.; Acharya, U.R. FFCAEs: An efficient feature fusion framework using cascaded autoencoders for the identification of gliomas. Int. J. Imaging Syst. Technol. 2023, 33, 483–494. [Google Scholar] [CrossRef]
  38. Demir, F.; Akbulut, Y.; Taşcı, B.; Demir, K. Improving brain tumor classification performance with an effective approach based on new deep learning model named 3ACL from 3D MRI data. Biomed. Signal Process. Control 2023, 81, 104424. [Google Scholar] [CrossRef]
  39. Qureshi, S.A.; Hussain, L.; Ibrar, U.; Alabdulkreem, E.; Nour, M.K.; Alqahtani, M.S.; Nafie, F.M.; Mohamed, A.; Mohammed, G.P.; Duong, T.Q. Radiogenomic classification for MGMT promoter methylation status using multi-omics fused feature space for least invasive diagnosis through mpMRI scans. Sci. Rep. 2023, 13, 3291. [Google Scholar] [CrossRef]
  40. Shelatkar, T.; Urvashi, D.; Shorfuzzaman, M.; Alsufyani, A.; Lakshmanna, K. Diagnosis of brain tumor using light weight deep learning model with fine-tuning approach. Comput. Math. Methods Med. 2022, 2022, 2858845. [Google Scholar] [CrossRef] [PubMed]
  41. Available online: http://www.itksnap.org/pmwiki/pmwiki.php (accessed on 11 December 2022).
  42. Fernandes, S.L.; Tanik, U.J.; Rajinikanth, V.; Karthik, K.A. A reliable framework for accurate brain image examination and treatment planning based on early diagnosis support for clinicians. Neural Comput. Appl. 2020, 32, 15897–15908. [Google Scholar] [CrossRef]
  43. Dey, N.; Rajinikanth, V.; Shi, F.; Tavares, J.M.R.; Moraru, L.; Karthik, K.A.; Lin, H.; Kamalanand, K.; Emmanuel, C. Social-Group-Optimization based tumor evaluation tool for clinical brain MRI of Flair/diffusion-weighted modality. Biocybern. Biomed. Eng. 2019, 39, 843–856. [Google Scholar] [CrossRef]
  44. Rajinikanth, V.; Thanaraj, K.; Satapath, S.C.; Fernandes, S.; Dey, N. Shannon’s entropy and watershed algorithm based technique to inspect ischemic stroke wound. In Smart Intelligent Computing and Applications; Springer: Singapore, 2019; pp. 23–31. [Google Scholar] [CrossRef]
  45. Wang, G.; Deb, S.; Coelho, L. Elephant herding optimization. In Proceedings of the 2015 3rd International Symposium on Computational and Business Intelligence (ISCBI), Bali, Indonesia, 7–9 December 2015; pp. 1–5. [Google Scholar] [CrossRef]
  46. Ali, M.A.; Balasubramanian, K.; Krishnamoorthy, G.D.; Muthusamy, S.; Pandiyan, S.; Panchal, H.; Mann, S.; Thangaraj, K.; El-Attar, N.E.; Abualigah, L.; et al. Classification of glaucoma based on elephant-herding optimization algorithm and deep belief network. Electronics 2022, 11, 1763. [Google Scholar] [CrossRef]
  47. Nayak, M.; Das, S.; Bhanja, U.; Senapati, M. Elephant herding optimization technique based neural network for cancer prediction. Inform. Med. Unlocked 2020, 21, 100445. [Google Scholar] [CrossRef]
  48. Lu, S.-Y.; Zhang, Z.; Zhang, Y.-D.; Wang, S.-H. CGENet: A deep graph model for COVID-19 detection based on chest CT. Biology 2022, 11, 33. [Google Scholar] [CrossRef]
  49. Lu, S.; Wang, S.-H.; Zhang, Y.-D. Detection of abnormal brain in MRI via improved AlexNet and ELM optimized by chaotic bat algorithm. Neural Comput. Appl. 2021, 33, 10799–10811. [Google Scholar] [CrossRef]
Figure 1. Structure of the proposed scheme to examine BT in MRI slices of FLAIR/T2 modalities with and without the skull.
Figure 2. Sample test images of BRATS and TCIA. (a) FLAIR-modality MRI slices collected from BRATS, (b) T2-modality MRI slices collected from TCIA.
Figure 3. Clinically collected T2-modality MRI slices. (a) LGG, (b) GBM.
Figure 4. Tumor-section extraction from the chosen MRI slice with the watershed algorithm. (a) Original image, (b) noisy image, (c) tumor from original image, (d) tumor from noisy image.
Figure 5. The working procedure of EHA. (a) Initial EHA, (b) EHA-based optimization.
Figure 6. Feature-selection process with the EHA.
Figure 7. Glyph plot of Table 3 values. (a) LGG/HGGs, (b) LGG/GBMs.
Figure 8. Various convolutional outcomes achieved during the DL + ML-based classification. (a) Convolution 1, (b) convolution 2, (c) convolution 3, (d) convolution 4, (e) convolution 5.
Figure 9. Training and validation outcome achieved with DL + ML classification. (a) Accuracy, (b) loss.
Figure 10. Confusion matrix (CM) and ROC curve achieved with DL + ML classification. (a) CM, (b) ROC.
Figure 11. Spider plot achieved with the classification results of FLAIR- and T2-modality MRI. (a) Performance measure for FLAIR-modality dual deep features, (b) performance measure for FLAIR-modality DL + ML features, (c) performance measure for T2-modality dual deep features, (d) performance measure for T2-modality DL + ML features.
Table 1. Summary of MRI-based BT detection implemented with the DL scheme.
| Reference | Procedure Employed | Accuracy (%) |
|---|---|---|
| Kalaiselvi et al. [24] | Convolutional-neural-network (CNN)-supported examination of BT in the BRATS database. | 99.00 |
| Raja [25] | The implementation of a deep autoencoder along with Bayesian fuzzy-clustering segmentation is discussed to detect BT in the BRATS database. | 98.50 |
| Amin et al. [26] | Detection of BT in BRATS is presented using stacked autoencoders. | 98.00 |
| Özyurt and Avcı [27] | This work implements fuzzy c-means-based superpixel detection and CNN with an extreme-learning machine to detect BT in a TCIA dataset. | 98.33 |
| Anilkumar and Kumar [28] | BT in the BRATS database is assessed using transfer learning and KNN classification. | 97.28 |
| Sharif et al. [29] | Deep-transfer-learning-supported segmentation and classification are performed using MRI slices from BRATS. | 92.00 |
| Han et al. [30] | Data augmentation and classification of MRI slices from BRATS are performed using the CNN approach. | 91.00 |
| Amin et al. [31] | Transfer learning with score-level fusion to detect BT in MRI slices from the BRATS database. | 99.00 |
| Siar and Teshnehlab [32] | Integrated DL and ML approaches are presented to detect BT in MRI slices from the BRATS database. | 87.00 |
| Krishnammal and Raja [33] | CNN-based classification and BT-severity detection are performed using BRATS. | 98.00 |
| Ezhilarasi and Varalakshmi [34] | R-CNN scheme-based detection of BT from the BRATS database is discussed. | 97.50 |
| Antony et al. [35] | Automatic detection of BT using BRATS and CNN is presented. | 97.00 |
| Pandian and Balasubramanian [36] | Implementation of content-based image retrieval is discussed using TCIA brain-MRI slices. | 88.00 |
| Gudigar et al. [37] | Cascaded-autoencoder-based feature fusion and binary classification are implemented to detect BT in T2-modality MRI slices from TCIA. | 96.70 |
| Demir et al. [38] | A novel CNN scheme is implemented to examine multi-modality brain MRIs from BRATS. | 99.29 |
| Qureshi et al. [39] | Deep-learning radiomic-feature-extraction-based automatic detection of brain MRI is proposed for the BRATS database. | 96.84 |
| Shelatkar et al. [40] | Automatic examination of a tumor in an MRI slice with a lightweight deep-learning scheme. | - |
Table 2. The MRI slices considered in this work to verify the performance.
| Image | Dimensions | Total | Training | Validation | Testing |
|---|---|---|---|---|---|
| Class 1 | 224 × 224 × 3 | 1500 | 1200 | 150 | 150 |
| Class 2 | 224 × 224 × 3 | 1500 | 1200 | 150 | 150 |
Table 3. Performance metrics computed during DL-feature-based classification with SoftMax.
| BT | Scheme | TP | FN | TN | FP | AC | PR | SE | SP | F1S |
|---|---|---|---|---|---|---|---|---|---|---|
| LGG/HGG | VGG16 | 139 | 10 | 138 | 13 | 92.3333 | 91.4474 | 93.2886 | 91.3907 | 92.3588 |
| LGG/HGG | DenseNet101 | 135 | 14 | 140 | 11 | 91.6667 | 92.4658 | 90.6040 | 92.7152 | 91.5254 |
| LGG/HGG | ResNet101 | 136 | 13 | 134 | 17 | 90.0000 | 88.8889 | 91.2752 | 88.7417 | 90.0662 |
| LGG/HGG | VGG19 | 133 | 19 | 136 | 12 | 89.6667 | 91.7241 | 87.5000 | 91.8919 | 89.5623 |
| LGG/HGG | ResNet50 | 133 | 17 | 135 | 15 | 89.3333 | 89.8649 | 88.6667 | 90.0000 | 89.2617 |
| LGG/GBM | VGG16 | 138 | 14 | 138 | 10 | 92.0000 | 93.2432 | 90.7895 | 93.2432 | 92.0000 |
| LGG/GBM | DenseNet101 | 136 | 13 | 138 | 13 | 91.3333 | 91.2752 | 91.2752 | 91.3907 | 91.2752 |
| LGG/GBM | VGG19 | 134 | 19 | 136 | 11 | 90.0000 | 92.4138 | 87.5817 | 92.5170 | 89.9329 |
| LGG/GBM | ResNet101 | 131 | 18 | 137 | 14 | 89.3333 | 90.3448 | 87.9195 | 90.7285 | 89.1156 |
| LGG/GBM | ResNet50 | 132 | 17 | 135 | 16 | 89.0000 | 89.1892 | 88.5906 | 89.4040 | 88.8889 |
Table 4. Metrics achieved with dual deep and fused features for FLAIR-modality MRI.
| Features | Classifier | TP | FN | TN | FP | AC | PR | SE | SP | F1S |
|---|---|---|---|---|---|---|---|---|---|---|
| Dual Deep | SoftMax | 143 | 5 | 146 | 6 | 96.3333 | 95.9732 | 96.6216 | 96.0526 | 96.2963 |
| Dual Deep | DT | 146 | 6 | 145 | 3 | 97.0000 | 97.9866 | 96.0526 | 97.9730 | 97.0100 |
| Dual Deep | RF | 144 | 7 | 145 | 4 | 96.3333 | 97.2973 | 95.3642 | 97.3154 | 96.3211 |
| Dual Deep | KNN | 145 | 4 | 144 | 7 | 96.3333 | 95.3947 | 97.3154 | 95.3642 | 96.3455 |
| Dual Deep | SVM | 144 | 7 | 146 | 3 | 96.6667 | 97.9592 | 95.3642 | 97.9866 | 96.6443 |
| DL + ML | SoftMax | 146 | 2 | 148 | 4 | 98.0000 | 97.3333 | 98.6486 | 97.3684 | 97.9866 |
| DL + ML | DT | 148 | 3 | 144 | 5 | 97.3333 | 96.7320 | 98.0132 | 96.6443 | 97.3684 |
| DL + ML | RF | 150 | 1 | 147 | 2 | 99.0000 | 98.6842 | 99.3377 | 98.6577 | 99.0099 |
| DL + ML | KNN | 151 | 1 | 147 | 1 | 99.3333 | 99.3421 | 99.3421 | 99.3243 | 99.3421 |
| DL + ML | SVM | 147 | 4 | 148 | 1 | 98.3333 | 99.3243 | 97.3510 | 99.3289 | 98.3278 |
Table 5. Metrics achieved with dual deep and fused features for T2-modality MRI.
| Features | Classifier | TP | FN | TN | FP | AC | PR | SE | SP | F1S |
|---|---|---|---|---|---|---|---|---|---|---|
| Dual Deep | SoftMax | 142 | 6 | 145 | 7 | 95.6667 | 95.3020 | 95.9459 | 95.3947 | 95.6229 |
| Dual Deep | DT | 144 | 6 | 145 | 5 | 96.3333 | 96.6443 | 96.0000 | 96.6667 | 96.3211 |
| Dual Deep | RF | 143 | 9 | 146 | 2 | 96.3333 | 98.6207 | 94.0789 | 98.6486 | 96.2963 |
| Dual Deep | KNN | 145 | 2 | 144 | 9 | 96.3333 | 94.1558 | 98.6395 | 94.1176 | 96.3455 |
| Dual Deep | SVM | 144 | 7 | 146 | 3 | 96.6667 | 97.9592 | 95.3642 | 97.9866 | 96.6443 |
| DL + ML | SoftMax | 146 | 5 | 147 | 2 | 97.6667 | 98.6486 | 96.6887 | 98.6577 | 97.6589 |
| DL + ML | DT | 146 | 1 | 150 | 3 | 98.6667 | 97.9866 | 99.3197 | 98.0392 | 98.6486 |
| DL + ML | RF | 146 | 3 | 148 | 3 | 98.0000 | 97.9866 | 97.9866 | 98.0132 | 97.9866 |
| DL + ML | KNN | 147 | 2 | 149 | 2 | 98.6667 | 98.6577 | 98.6577 | 98.6755 | 98.6577 |
| DL + ML | SVM | 152 | 1 | 147 | 0 | 99.6667 | 100 | 99.3464 | 100 | 99.6721 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Rajinikanth, V.; Vincent, P.M.D.R.; Gnanaprakasam, C.N.; Srinivasan, K.; Chang, C.-Y. Brain Tumor Class Detection in Flair/T2 Modality MRI Slices Using Elephant-Herd Algorithm Optimized Features. Diagnostics 2023, 13, 1832. https://doi.org/10.3390/diagnostics13111832
