Article

Prediction of Glioma Grades Using Deep Learning with Wavelet Radiomic Features

by
Gökalp Çinarer
1,*,
Bülent Gürsel Emiroğlu
2 and
Ahmet Haşim Yurttakal
1
1
Computer Technologies Department, Yozgat Bozok University, 66100 Yozgat, Turkey
2
Computer Engineering Department, Kırıkkale University, 71450 Kırıkkale, Turkey
*
Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(18), 6296; https://doi.org/10.3390/app10186296
Submission received: 17 July 2020 / Revised: 2 September 2020 / Accepted: 8 September 2020 / Published: 10 September 2020
(This article belongs to the Special Issue Artificial Intelligence and Radiomic Analysis in Medicine)

Abstract:
Gliomas are the most common primary brain tumors. They are classified into four grades (Grade I–IV) according to the guidelines of the World Health Organization (WHO). Accurate grading of gliomas has clinical significance for pre-diagnosis, monitoring, prognostic treatment planning, and the administration of chemotherapy. The purpose of this study is to develop a deep learning-based method for classifying brain tumor glioma grades from radiomic features with a deep neural network (DNN). The classifier was combined with the discrete wavelet transform (DWT), a powerful feature extraction tool. This study focuses on the four main stages of the radiomic workflow: tumor segmentation, feature extraction, statistical analysis, and classification. We evaluated data from 121 patients with brain tumors (Grade II, n = 77; Grade III, n = 44) from The Cancer Imaging Archive, and 744 radiomic features were obtained by applying low sub-band and high sub-band 3D wavelet transform filters to the 3D tumor images. Quantitative values were statistically analyzed with Mann-Whitney U tests, and 126 radiomic features with statistically significant properties were selected across eight different wavelet filters. Classification performances of the 3D wavelet transform filter groups were measured in terms of accuracy, sensitivity, specificity, and F1 score using the deep learning classifier model. The proposed model was highly effective in grading gliomas, with 96.15% accuracy, 94.12% precision, 100% recall, 96.97% F1 score, and 98.75% area under the ROC curve. As a result, deep learning and feature selection techniques with wavelet transform filters can be accurately applied to glioma grade classification using the proposed method.

1. Introduction

Gliomas are primary malignant tumors that are common in the brain, with a high relapse rate and high mortality. According to data from the American Cancer Society, 23,880 people were diagnosed with malignant brain and spinal cord tumors in 2018, and 70% of those diagnosed died [1]. Gliomas are usually classified in a range of grades from I to IV. According to the classification of the World Health Organization (WHO), gliomas can be subdivided by their malignancy from Grade II (lower grade) to Grade IV (high grade) [2]. Gliomas, hypophysis (pituitary) tumors, and meningiomas are among the primary brain tumors [3]. The WHO categorization includes low-grade gliomas (LGGs), diffuse low-grade (Grade II) and medium-grade gliomas (Grade III), tumors with highly variable behavior whose textural structures are unpredictable. In addition, according to the WHO, low-grade gliomas are infiltrative neoplasms that usually comprise low- and medium-grade gliomas (Grade II and Grade III) [4]. Grade II and Grade III brain tumors can be of the astrocytoma, oligoastrocytoma, or oligodendroglioma types. Because these types can occur in both the Grade II and Grade III groups, assigning brain tumors to the right classes will facilitate the treatment of brain cancers. Astrocytomas can be low grade (Grade II) or high grade (Grade III). Low-grade astrocytomas grow slowly in a confined area, whereas high-grade astrocytomas grow rapidly and require different treatment methods [5]. The tissues of Grade II tumors are malignant, and these cells do not look very much like normal cells. Grade III tumors consist of malignant tissue cells that also grow actively [5]. Classifying Grade II and Grade III tumors, in both cellular and MRI images, is quite complex and clinically demanding.
In the treatment of gliomas, the desired level of progress has not yet been achieved. Accurate grading of the tumor area is therefore very important. Classical surgery is risky when tumors are located in deep and critical areas of the brain. In such cases, it is possible to identify the grade of the tumor with a safer, high-tech stereotactic biopsy and histopathological examination, plan the subsequent treatment accordingly, and then remove the tumor with a smaller operation. Knowing whether the tumor is benign or malignant and determining its grade allows the patient to be better informed about the treatment method to be applied.
Agnostic radiomic features are quantitative descriptors that extract the radiomic features of medical images using multiple data characterization algorithms and investigate the relationship between images and underlying genetic features [6]. The multivariate radiomic features that are automatically revealed have promising effects in the identification of prognostic factors that provide information about the clinical course at the time of diagnosis, regardless of treatment [7]. Radiomics is a method of obtaining many quantitative parameters of radiological images to analyze the texture and complex structure of lesions [8]. Radiological imaging is an important step used for the localization of the tumor, preliminary diagnosis, surgical planning, determination of the amount of postoperative tumor resection, and planning the treatment and follow-up in patients with glioma.
In recent years, radiomics has developed as a method that detects medical image properties obtained by imaging methods, such as shapes, first-order features, textures, or wavelet properties, without requiring surgical procedures [9]. Several studies have used various feature selection methods to estimate and classify glioma grades with accuracy [10,11], texture [12], and MRI radiomic features [13,14,15]. As these prominent methods show, diagnostic accuracy will increase with the quantitative evaluation of the classical information (structural, functional, etc.) of MR images by advanced image analysis techniques such as morphological and texture analysis [16].
The contents of these studies include machine learning and deep learning algorithms integrated with the quantitative features of the image texture, also called radiomics. However, the number of studies in this area is limited and the correct determination of tumor grades will facilitate the application of important findings, such as in pre-diagnosis, surgical treatment strategies, tumor treatment processes, and tumor localization. Based on these considerations, we have proposed a system that aims to accurately determine the grades of brain tumors using multi-parameter analysis of wavelet radiomic features. In addition, we classify brain tumor grades according to 3D wavelet filter groups with a deep neural network (DNN)-based model.
The remainder of the study proceeds as follows. Section 2 reviews the related literature. Section 3 describes feature extraction and the classification process with the deep neural network (DNN). Section 4 presents the test results in detail and evaluates them against similar studies. Section 5 discusses the advantages and disadvantages of the proposed model. Finally, Section 6 presents conclusions and future work.

2. Related Work

In this section, current studies on brain cancer diagnosis and classification using deep learning and machine learning techniques are reviewed. Ramteke and Monali [17] used the nearest neighbors classifier as a classification algorithm for the statistical tissue properties of normal and malignant brain magnetic resonance imaging (MRI) findings, achieving an 80% classification rate. Similarly, Gadpayle and Mahajani [18] classified normal and malignant brain MR images according to tissue properties and achieved 72.5% accuracy with a neural network classifier. Ghosh and Bandyopadhyay [19], using the fuzzy C-means clustering algorithm with patient MRI images, detected different tumor types in the brain and other areas related to the brain with 89.2% accuracy. Abidin et al. [20] used the AdaBoost classification algorithm to detect metastasis and glioblastoma tumors and obtained an accuracy of 0.71. George et al. evaluated normal and abnormal brain tumors with the C4.5 decision tree and multilayer perceptron (MLP) machine learning algorithms according to their radiomic shape features [21]. Bahadure et al. [22] examined accuracy, sensitivity, and specificity using wavelet segmentation and feature extraction methods for brain MRI images. In the study by Nabizadeh et al., the Gabor wavelet transform was used to extract features of tumor areas in MR images and the performances of different classifiers were compared [23]. Hsieh et al. classified glioblastoma and low-grade gliomas with an accuracy of 0.88 using a logistic regression algorithm with brain tumor radiomic features [13]. There are many alternative classification methods to be investigated in radiographic analysis, such as logistic regression, the naive Bayes classifier, nearest neighbors, and decision trees [24].

3. Materials and Methods

We designed a model that aims to classify tumor grades correctly. The workflow consists of five steps: tumor segmentation, feature extraction, statistical analysis, feature selection, and classification. The brain tumor regions were evaluated interactively as Regions Of Interest (ROIs). The size of the ROI should be large enough to accurately capture the tissue information and thus reveal statistical significance. It should also be noted that the size of the ROI may depend on the MRI acquisition parameters: a 200 × 200 pixel ROI in an image with a resolution of 2.5 × 2.5 mm2 is not the same as in an image of 0.7 × 0.7 mm2. Generally, the tumor region occupies a very small proportion of a brain MRI image, which makes detection of the tumor difficult. For this purpose, the probable tumor region was enclosed manually in a convex ROI; in practice, a polygonal region no more concave than a rectangle is drawn manually. As a result of this process, the part outside the ROI is ignored, so higher success is achieved. For texture analysis, ROIs for each MR imaging set of each patient were manually drawn slice by slice in the axial plane for each of the available sequences.
In the next step, the ROIs were segmented with the GrowCut segmentation algorithm [25]. Then, 3D wavelet transform filters were used to detect radiomic features of the tissue properties from the multispectral layers. Quantitative parameters of textural and first-order features were obtained using the radiomic feature extraction method. The statistical significance of the data was tested with the non-parametric Mann-Whitney U test. The radiomic features resulting from the extraction process were classified by the DNN. Figure 1 shows the block diagram of the proposed system.

3.1. Dataset

The Cancer Imaging Archive (TCIA) is a popular worldwide portal that provides full open access to medical images for cancer research. MRI data of the patients in this study were obtained from TCIA (http://cancerimagingarchive.net/) of the National Cancer Institute [26]. TCIA is a database that allows the use of MRI medical images of various types of cancer in academic studies and research.
All materials and images included in the LGG-1p/19q deletion dataset were used in accordance with the rules, guidelines, and licensing policies regarding patient protection [27,28]. The study used 121 patients whose brain tumors were proven to be Grade II or Grade III by biopsy. T1-weighted, T2-weighted, and Fluid-Attenuated Inversion Recovery (FLAIR) sequences of each patient were examined. The choice of MRI sequence for radiomic feature determination depends on the application. Contrast-enhanced T1-weighted images have been used in tissue analysis and segmentation in the literature [29], and T2-weighted images have been used to classify benign and malignant tumors [30]. The tissue properties of each patient's MRI images differ, so no definitive assessment can be made as to which MRI sequence is better. T2-weighted and FLAIR images were used in this study, covering patients from the LGG-1p/19q deletion dataset with Grade II (n = 77) and Grade III (n = 44) tumors. Examples of the original Grade II and Grade III gliomas used in the study are shown in Figure 2.

3.2. GrowCut Segmentation

Image segmentation is the process of dividing an image into meaningful regions where different properties are labeled for each pixel. The methods designed for image segmentation and their performances vary depending on the image type, application method, size, and color intensity. Automatic image segmentation is one of the most difficult operations of image processing. Various algorithms based on MRI have been proposed for automatic glioma segmentation. For example, a segmentation method using fuzzy clustering techniques [31], a morphological edge detection method [32], a segmentation method based on tumor tissue pixel densities [33], and a graph-based segmentation method for glioblastoma tumors [34] have been tried. However, many algorithms, such as neural network methods, morphological methods, clustering methods, and Gaussian models, have also been used for the segmentation of brain tissues. The main purpose of these algorithms is to identify the tumor area quickly, steadily, consistently, and accurately.
Here we use the GrowCut segmentation method, introduced by Vezhnevets and Konouchine [25] as a cellular automata algorithm, with a label in each cell, developed for the multilabel segmentation of 2D or 3D images. The GrowCut algorithm grows labeled regions automatically from user-supplied scribbles. GrowCut labeling was started from the tumor-core and background pixels that we drew manually with the paint effect tool; the process is complete when the algorithm has labeled all the pixels in the manually drawn ROI. Labeling is done by comparing the weighted strength of each pixel with that of its neighbors: a pixel takes a neighbor's label when the neighbor's weighted attack strength exceeds the pixel's own strength. Since all pixels must be visited in each iteration of the GrowCut algorithm, time is lost; to limit this, the ROI was set manually. Expert radiologist support was received during this process. Regions left open as a result of the segmentation were filled manually. After establishing the ROI, automated tumor segmentation was performed by plotting the axial, sagittal, and coronal positions of the tumor images according to the segmentation information in the dataset with the GrowCut algorithm. Figure 3 shows the segmentation procedures.
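The neighbor-attack rule described above can be sketched in a few lines. The following is a simplified 2D toy, not the study's implementation (the study segments 3D MRI volumes, and production implementations such as the one in 3D Slicer are far more optimized):

```python
import numpy as np

def growcut(image, seeds, n_iter=50):
    """Minimal 2D GrowCut sketch. `seeds` holds user scribbles:
    1 = tumor, 2 = background, 0 = unlabeled."""
    img = image.astype(float)
    max_diff = (img.max() - img.min()) or 1.0
    labels = seeds.copy()
    strength = (seeds > 0).astype(float)   # seed cells start at full strength
    h, w = img.shape
    for _ in range(n_iter):
        changed = False
        for y in range(h):
            for x in range(w):
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] > 0:
                        # attack strength decays with intensity difference
                        g = 1.0 - abs(img[ny, nx] - img[y, x]) / max_diff
                        if g * strength[ny, nx] > strength[y, x]:
                            labels[y, x] = labels[ny, nx]
                            strength[y, x] = g * strength[ny, nx]
                            changed = True
        if not changed:
            break
    return labels

# toy image: bright "tumor" on the right, dark background on the left
img = np.zeros((4, 4)); img[:, 2:] = 10.0
seeds = np.zeros((4, 4), dtype=int)
seeds[0, 3] = 1   # tumor scribble
seeds[0, 0] = 2   # background scribble
result = growcut(img, seeds)
print(result)
```

Because the attack strength drops to zero across the sharp intensity boundary, each scribble's label floods only its own region, which is the behavior the algorithm relies on at tumor edges.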

3.3. Wavelet Transform

The wavelet transform is an important algorithm used in the analysis of MR images [35]. Here, the discrete wavelet transform was applied with a total of eight filters: four high-pass and four low-pass. The discrete wavelet transform works on the principle of sub-band coding and, thanks to its characteristics, is a good technique for image processing using mother wavelets. Wavelet decomposition is a process in which useful image features are selected that extract an image's radiomic features from each sub-band, and the discriminatory aspects of these data are then detected in visual analysis. This model trades off geometric localization between narrow high-pass and wide low-pass filters and is widely preferred [36]. In previous studies [37,38], 2D wavelets were used for filtering images, but in the present work a 3D wavelet transform is used for extracting radiomic features. The fact that brain tumors have fine tissue structure also reflects positively on the classification results of high-pass filters. Radiomic features can be determined on the original brain tumor image or after filtering; many filters can be used to obtain radiomic features, including wavelet and Laplacian of Gaussian (LoG) filters as well as square-root, logarithm, square, and exponential filters. In addition to the original image, we used wavelet-transformed images when extracting texture features. In our 3D wavelet technique, sub-volumes are created along the three spatial axes (x, y, z), resulting in eight image sub-volumes. The 3D volume is first filtered along the x-axis, yielding a high-pass and a low-pass image, H(x, y, z) and L(x, y, z). This process is then repeated along the y-axis, producing four sub-bands (LL, LH, HL, HH). Along the third axis (z), these four volumes are filtered into a total of eight sub-bands: the high-pass sub-bands (HLL, HLH, HHL, HHH) and the low-pass sub-bands (LLL, LLH, LHL, LHH).
Fine texture is generally obtained from the details (i.e., high-pass filters), while coarse texture is obtained from the approximations (i.e., low-pass filters). The decomposition is performed by a wavelet analysis filter bank that convolves the mother wavelets with the volume, with a single down-sampling step in each direction [39]. We tried various wavelet families for the decomposition, including Daubechies, Symlets, Coiflets, and Haar. The Haar wavelet is a square function made up of two coefficients, a mother wavelet with both orthogonality and symmetry properties; its biggest drawback is that it is not smooth [40]. Daubechies wavelets were developed from Haar wavelets; they have a larger number of coefficients, but they are not symmetric [41]. Symmetric mother wavelets such as Symlets and Biorthogonal wavelets were developed over time; their common feature is that, although they have less orthogonality, they have high smoothness [42]. The Daubechies wavelet transform is implemented through a series of decompositions, like the Haar transform; the only difference is that the filter length is more than two, so it is more local and smoother [43]. Coiflet wavelets were developed based on Daubechies wavelets [44]. Although there are many mother wavelets, the dependence of radiomic prognosis prediction on the mother wavelet, in terms of patient survival, has still not been investigated [45]. The Daubechies wavelet filter is used in this study. The feature extraction process was then started to determine the wavelet filter that achieves high accuracy in the classification of Grade II and Grade III tumors.
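The axis-by-axis decomposition into eight sub-bands can be sketched as follows. For brevity, this toy uses the orthonormal Haar wavelet rather than the Daubechies filter adopted in the study (libraries such as PyWavelets provide both); the volume is an invented placeholder:

```python
import numpy as np

def haar_split(vol, axis):
    """One-level orthonormal Haar analysis along one axis -> (low, high)."""
    even = np.take(vol, range(0, vol.shape[axis], 2), axis=axis)
    odd = np.take(vol, range(1, vol.shape[axis], 2), axis=axis)
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def wavelet3d(vol):
    """Single-level 3D DWT: split along x, then y, then z,
    yielding the eight sub-bands LLL ... HHH."""
    bands = {'': vol.astype(float)}
    for axis in range(3):
        bands = {name + suffix: part
                 for name, v in bands.items()
                 for suffix, part in zip('LH', haar_split(v, axis))}
    return bands

volume = np.arange(64, dtype=float).reshape(4, 4, 4)  # toy "tumor volume"
bands = wavelet3d(volume)
print(sorted(bands))        # the 8 sub-band names
print(bands['LLL'].shape)   # (2, 2, 2)
```

Because the Haar split is orthonormal, the total energy of the eight sub-bands equals that of the original volume, which is a convenient sanity check for any filter-bank implementation.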

3.4. Radiomic Feature Extraction

Radiomics is the process of quantitatively identifying the properties of a tumor volume. Radiomic features fall into two categories: semantic (size, shape, location, etc.) and agnostic (first-order features, texture, wavelet, etc.) [46]. The agnostic radiomic features of the tumor images were extracted using the open-source PyRadiomics Python package [47], which enables the processing and extraction of radiomic features from medical image data.
We examined the effectiveness of 3D wavelet radiomic features (LLH, LHL, LHH, LLL, HLL, HLH, HHL, HHH) belonging to six different matrices in brain cancer grade estimation. These matrices are as follows:
  • First-Order Features: Describe the individual voxel values obtained as a result of ROI cropping. These are generally histogram-based properties (energy, entropy, kurtosis, skewness).
  • Gray Level Co-occurrence Matrix (GLCM): Counts how often pairs of pixel values occur together in an image and takes statistical measurements from this matrix. The resulting values numerically characterize the texture of the image [48].
  • Gray Level Run Length Matrix (GLRLM): Defined over runs of consecutive pixels with the same gray tone; quantifies gray-level run lengths [49].
  • Gray Level Size Zone Matrix (GLSZM): Properties based on this matrix count voxels according to the size of connected gray-level zones in an image.
  • Neighboring Gray Tone Difference Matrix (NGTDM): Quantifies the difference between a voxel's gray value and the average gray value of its neighbors. Mathematical definitions of these properties are evaluated independently of the imaging method.
  • Gray Level Dependence Matrix (GLDM): Counts the number of voxels within a given distance that are dependent on the central voxel.
A total of 144 first-order features and 600 texture features, or a total of 744 3D wavelet features, were extracted with 3D Slicer. We applied a statistical analysis method to select the strongest features.
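As a rough illustration of the first-order category, the histogram-based properties named above can be computed directly from the ROI voxels. These are simplified stand-ins for the exact PyRadiomics definitions (which, e.g., discretize intensities with a configurable bin width):

```python
import numpy as np

def first_order_features(roi):
    """Simplified first-order radiomic features of the voxels in an ROI."""
    x = np.asarray(roi, dtype=float).ravel()
    n = x.size
    mean = x.mean()
    sd = x.std()
    # histogram-based entropy over 16 discrete gray-level bins
    hist, _ = np.histogram(x, bins=16)
    p = hist / n
    p = p[p > 0]
    return {
        'Energy': float((x ** 2).sum()),
        'Entropy': float(-(p * np.log2(p)).sum()),
        'Skewness': float(((x - mean) ** 3).mean() / sd ** 3),
        'Kurtosis': float(((x - mean) ** 4).mean() / sd ** 4),
    }

feats = first_order_features([1, 2, 3, 4])   # toy voxel values
print(feats)
```

For the toy input, Energy is 30 (sum of squares) and Skewness is 0, since the values are symmetric about their mean.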

3.5. Statistical Analysis and Feature Selection

Having too many features in a study extends the calculation time, while the absence of a semantic relationship between the features reduces classification accuracy. For this reason, feature selection was applied. There are many methods for extracting and selecting the most suitable feature set, such as Principal Component Analysis (PCA) and sequential forward selection (SFS). PCA is a mathematical data mining method that identifies the components of the data carrying the most information and reduces the data accordingly. PCA determines the most fundamental factors based on the relationships between variables, but the purpose of our study is to determine the variables that differentiate the Grade II and Grade III groups from each other. It is therefore important that 126 of the 744 variables reveal significant differences between Grade II and Grade III; when extracting the radiomic features, we did not aim to condense the properties into a single variable. Statistical analysis of the radiomic feature data was done with the Statistical Package for the Social Sciences (SPSS) [50], useful software that is widely preferred in medical studies as well as the social sciences. Texture feature extraction is based on statistical distributions. We analyzed first-order (global) features and texture features (GLCM, GLDM, GLRLM, GLSZM, and NGTDM). We performed univariate analysis based on the Mann-Whitney U test of significance for comparisons between grades and multiscale texture types (i.e., LLL, HLL, LHL, HHL, LLH, HLH, LHH, and HHH), since the data of the 121 patients with Grade II (n = 77) and Grade III (n = 44) tumors were not normally distributed. Whether there were missing values and outliers was analyzed with SPSS, and kurtosis and skewness values were checked to assess whether the data were normally distributed [51].
According to the literature, data are considered normally distributed if these values are in the range of ±3 or ±2 [52]. Accordingly, the data were found to have no missing values or outliers, but not to be normally distributed. In line with these results, the statistical significance of the data was tested with the non-parametric Mann-Whitney U test, and we detected the radiomic features that created significant differences between Grade II and Grade III. Spearman correlation analysis was used because the variables were measured on ordinal or interval scales but did not conform to a normal distribution. The p-value indicates the probability of error incurred when we declare a statistically significant difference in a comparison. Following the Holm-Bonferroni method, a correlation value of 0.3 to 0.4 indicates a medium correlation, and p < 0.05 is considered statistically significant. A total of 126 radiomic features with p < 0.05 were found statistically significant among the agnostic radiomic features. The distribution of the 126 features across the eight wavelet filters is as follows: in the four low filter bands, LLH has 18 radiomic features, LHL 19, LHH 14, and LLL 20; in the four high filter bands, HLL has 11 radiomic features, HLH 19, HHL 13, and HHH 12.
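This selection rule amounts to a univariate filter: keep each feature whose Mann-Whitney U p-value falls below 0.05. A sketch with SciPy on invented data (the feature matrix and the shift applied to feature 0 are hypothetical, chosen only so that one feature is genuinely discriminative):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
# hypothetical feature matrix: 77 Grade II vs 44 Grade III patients, 5 features
grade2 = rng.normal(0.0, 1.0, size=(77, 5))
grade3 = rng.normal(0.0, 1.0, size=(44, 5))
grade3[:, 0] += 2.0          # make feature 0 differ between the grades

selected = []
for j in range(grade2.shape[1]):
    _, p = mannwhitneyu(grade2[:, j], grade3[:, j])
    if p < 0.05:             # keep features that differ significantly
        selected.append(j)
print(selected)
```

The shifted feature reliably survives the filter, while purely random features are kept only at the 5% false-positive rate the threshold implies (which is why corrections such as Holm-Bonferroni matter when testing 744 features).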

3.6. Deep Neural Network (DNN) Model

Deep learning extends traditional neural networks by adding hidden layers to network architectures between input and output layers while modeling more complex and nonlinear situations [53]. This situation has attracted the attention of researchers with its performance in many areas such as image processing, classification, and medical image analysis. Deep learning has been used frequently in recent years in medical imaging and computer-aided radiological research. The increasing availability of medical image data as well as the increasing data processing speeds of computers have had a major impact on this [54].
There are various deep learning architectures, among which Convolutional Neural Networks (CNNs) have been widely used in recent years. The CNN architecture does not require feature extraction before implementation. On the other hand, training a CNN from scratch is time-consuming and difficult, and before the model is ready it needs a very large dataset for compilation and training [55]. It is therefore not appropriate to apply a CNN to every dataset. Our proposed methodology is based on a DNN architecture for classifying brain tumors as Grade II or Grade III, with radiomic features obtained in different categories as input. Used this way, deep learning achieves high success in many problems, including image processing [56], video processing [57], and speech recognition [58]. As with the model applied in this study, using radiomic features in the same architecture with deep learning algorithms can improve outcome estimation.
High classification results have been obtained in recent deep learning studies with fewer than 100 patients [59,60,61]. Considering all these criteria, this study was designed with the open-source H2O Python module, a feedforward DNN trained using backpropagation [62]. The most important parameters in the H2O deep learning module are the number of epochs, the number of hidden layers, and the number of neurons in each hidden layer. For nonlinear problems, it has been suggested to start with two hidden layers. The larger the hidden layers, the easier learning becomes, but after a certain point the success rate will probably decrease [63]. For example, Liu and Chen showed that a 400 × 400 × 400 network structure reached lower error rates than similar network structures [64].
Different parameters and layers were used to increase network robustness. The deep learning architecture used comprises two hidden layers of 200 neurons each (200 × 200). The activation function used in the hidden layers is the Rectified Linear Unit (ReLU), while the activation function used in the output layer is Softmax.
The DNN hyperparameters are shown in Table 1. Stochastic gradient descent (SGD) was used as the optimizer, with an epsilon value of 10^−8. Moreover, elastic-net regularization was performed to prevent over-fitting. Five-fold cross-validation was performed during training, the dataset was split with a Bernoulli distribution process, and the values were calculated.
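A forward pass through the architecture described here (two ReLU hidden layers of 200 neurons and a softmax output over the two grades) can be sketched with NumPy. The weights and inputs below are random placeholders, not the trained H2O model:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stable
    return e / e.sum(axis=1, keepdims=True)

def init_layer(n_in, n_out):
    # He initialization, a common choice for ReLU layers
    return rng.normal(0, np.sqrt(2 / n_in), (n_in, n_out)), np.zeros(n_out)

W1, b1 = init_layer(126, 200)   # 126 selected radiomic features in
W2, b2 = init_layer(200, 200)
W3, b3 = init_layer(200, 2)     # Grade II vs Grade III out

def forward(X):
    h1 = relu(X @ W1 + b1)
    h2 = relu(h1 @ W2 + b2)
    return softmax(h2 @ W3 + b3)

probs = forward(rng.random((4, 126)))   # 4 hypothetical patients
print(probs.shape)                      # (4, 2); each row sums to 1
```

Training (SGD with backpropagation, elastic-net penalties, five-fold cross-validation) is handled by H2O in the study; the sketch only shows how the layer sizes and activations fit together.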
The performance of both groups was measured using ROC curves and the accuracy, sensitivity, specificity, and area under the ROC curve (AUC) values. The selected radiomic features of each transform band were classified, and the accuracy levels for Grade II and Grade III tumors were examined comparatively across the wavelet groups. The computational complexity of the classification method is presented in Table 2.

4. Results

4.1. Data Exploration

Grade II and Grade III MRI brain tumors of 121 patients were segmented with the GrowCut algorithm, and 744 radiomic features of the 3D images were detected in the low sub-band and high sub-band filters of the wavelet transform. Since the data of the 121 patients were not normally distributed, statistical significance was tested with the non-parametric Mann-Whitney U test, which identified 126 radiomic features with significant statistical properties.
The feature distribution across the eight sub-band groups is as follows: 18 radiomic features for LLH, 19 for LHL, 14 for LHH, 20 for LLL, 11 for HLL, 19 for HLH, 13 for HHL, and 12 for HHH were selected with the non-parametric Mann-Whitney U test. This gives a total of 126 radiomic features: first-order features (n = 14) and texture features (n = 112).
First-order features show that the Energy feature creates a statistically significant difference between Grade II and Grade III. In the HLL, HHL, LLH, LHH, and HHH filter groups, the radiomic features of the GLCM matrix group did not show a statistically significant difference between Grade II and Grade III. Radiomic features belonging to the GLSZM matrix group in the LLL and HLH filter groups showed the highest statistically significant difference between Grade II and Grade III.
Among the 3D wavelet low-pass filter groups (LLH, LHL, LHH, LLL), the maximum statistically significant difference belonged to the 20 radiomic features of the LLL filter group. On the other hand, the maximum among the high-pass filter groups belonged to the 19 radiomic features of the HLH filter group. The statistically significant differences of the selected wavelet first-order and texture features of Grade II and Grade III tumor images, as evaluated by the Mann-Whitney U test, are provided in Appendix A.
First-order features for LLH show that Energy (p = 0.004) and Total Energy (p = 0.007) create a statistically significant difference between Grade II and Grade III. There was no significant difference between GLCM features and tumor grades for LLH.
We found 5 statistically significantly different features for LLH in GLDM, 5 features for LLH in GLRLM, 5 features for LLH in GLSZM, and 1 feature for LLH in NGTDM between Grade II and Grade III (p < 0.05).
For the LLH GLSZM and LHL GLSZM groups, Gray Level Non-Uniformity differed the most significantly (p = 0.000) between Grade II and Grade III. Two statistically significantly different features for LHL GLCM, 5 features for LHL GLDM, 4 features for LHL GLRLM, 5 features for LHL GLSZM, and 1 feature for LHL NGTDM were determined between Grade II and Grade III (p < 0.05).
Coarseness, Energy, and Total Energy differed significantly between Grades II and III (p < 0.005) in all wavelet filter groups (LLH, LHL, LHH, LLL, HLL, HLH, HHL, HHH). Only one radiomic feature, Dependence Non-Uniformity (p = 0.000), differed significantly between Grade II and Grade III in all filter groups. Correlation is frequently used in the statistical data analysis of radiomic features of medical images. In this study, the correlation between selected radiomic features belonging to each wavelet filter group is examined. Figure 4 shows the correlation graphs of wavelet filter groups. Correlation matrices of wavelet groups were obtained with Python.
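A correlation matrix like those in Figure 4 can be produced with SciPy's `spearmanr`, which is appropriate here because the features are not normally distributed. The feature matrix below is hypothetical; only one strongly related pair is planted for illustration:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
# hypothetical selected-feature matrix: 121 patients x 4 radiomic features
features = rng.random((121, 4))
features[:, 1] = features[:, 0] + 0.05 * rng.random(121)  # near-monotone pair

rho, pval = spearmanr(features)   # 4x4 rank-correlation and p-value matrices
print(np.round(rho, 2))
```

Rank correlation near 1 between two features, as between the planted pair here, signals redundancy of the kind reported between Dependence Non-Uniformity and Gray Level Non-Uniformity.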
A statistically significant and strong relationship was observed between the GLDM Dependence Non-Uniformity (w3-HLL) and the GLSZM Gray Level Non-Uniformity (w8-HLL) features at r = 0.9 and p < 0.001. There was a negative statistical relationship between the NGTDM Coarseness (w11-HLL) and Gray Level Non-Uniformity (w6-HLL) in the w-HLL group at r = −0.5 and p < 0.001.
At the same time, the radiomic features of the HHH group revealed a statistically significant and strong relationship between first-order Total Energy and GLSZM Size Zone Non-Uniformity features (r = 0.95).
There was a strong negative statistical relationship between the GLCM Idmn (w3-LHL) and Coarseness (w19-LHL), with a value of r = −0.6.
GLSZM Low Gray Level Zone Emphasis was highly correlated (r = 0.90) with GLRLM Long Run Low Gray Level Emphasis, GLDM Low Gray Level Emphasis, and GLRLM Low Gray Level Run Emphasis.
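Correlation matrices of this kind can be reproduced with NumPy. The sketch below uses synthetic data with assumed relationships (a strong positive and a moderate negative correlation) merely to show the computation; in the study each column would hold one selected radiomic feature across the patient cohort.

```python
import numpy as np

# Rows = patients, columns = selected radiomic features of one wavelet group.
# The values and relationships below are synthetic stand-ins, not study data.
rng = np.random.default_rng(0)
dep_non_uniformity = rng.normal(size=50)
gl_non_uniformity = 0.9 * dep_non_uniformity + 0.1 * rng.normal(size=50)  # strongly related
coarseness = -0.5 * dep_non_uniformity + rng.normal(size=50)              # negatively related

features = np.column_stack([dep_non_uniformity, gl_non_uniformity, coarseness])
corr = np.corrcoef(features, rowvar=False)  # Pearson correlation, feature by feature
```

Plotting `corr` as a heatmap per wavelet group yields figures analogous to Figure 4.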

4.2. Performance Evaluation

Classification was performed with the H2O supervised learning framework; 60% of the dataset was randomly reserved for training, 20% for validation, and 20% for testing. The dataset was split before the training phase. Table 3 shows the number of patients in each group.
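A minimal sketch of the 60/20/20 split, written with NumPy rather than H2O's own frame-splitting utilities; the seed and helper name are illustrative assumptions:

```python
import numpy as np

def split_indices(n_samples, seed=42):
    """Randomly partition sample indices into 60% train, 20% validation, rest test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(0.6 * n_samples)
    n_val = int(0.2 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_indices(121)  # 121 patients in the study
```

Splitting before training, as done here, keeps the test partition unseen until the final evaluation.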
The classification was implemented in an open source Python environment. After the training phase with the DNN model, it was tested on a dataset that had never been used in training to provide an unbiased assessment. Figure 5 shows the confusion matrices resulting from classification in all wavelet sub-bands.
Table 4 presents the confusion matrix structure. Equations (1)–(4) show the formulas for the evaluated performance metrics.
Accuracy: The most commonly used classification metric, accuracy measures the overall effectiveness of the classifier.
Accuracy = (tp + tn) / (tp + tn + fp + fn)
Precision: The fraction of samples predicted as positive that are actually positive; it measures how reliable the classifier's positive predictions are.
Precision = tp / (tp + fp)
Recall: The fraction of actual positive samples that the classifier correctly identifies; it is also known as sensitivity.
Recall = tp / (tp + fn)
F1 Score: To make a sound decision about classifier performance, metrics other than classification accuracy should also be evaluated. The F1 score, the harmonic mean of precision and recall, is calculated for this purpose.
F1 score = 2tp / (2tp + fp + fn)
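The four metrics can be checked with a few lines of Python. The confusion-matrix counts below are hypothetical values chosen only to be consistent with the test metrics reported for w-HHH (26 test patients), not counts taken from the paper:

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1 score from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return accuracy, precision, recall, f1

# Hypothetical counts consistent with 96.15% accuracy, 94.12% precision,
# 100% recall, and a 96.97% F1 score on 26 test patients.
acc, prec, rec, f1 = classification_metrics(tp=16, tn=9, fp=1, fn=0)
```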
The classification success of the model was evaluated with precision, recall, and F1 metrics. These performance metrics were obtained through the confusion matrix. Table 5 shows the classification performance statistics.
When the test results were examined, accuracy of 96.15% was obtained for w-HHH compared to 84.62% for w-HHL, 73.08% for w-HLH, 80.77% for w-HLL, 84.62% for w-LHH, 76.92% for w-LHL, 84.62% for w-LLH, and 80.77% for w-LLL. Thus, w-HHH has higher accuracy (96.15%) when compared with all other 3D wavelet transform bands. The area under the ROC curve (AUC) is also generally examined [65]. When AUC values are analyzed, w-HHH has the highest AUC value of 98.75%. Figure 6 shows the ROC curve for w-HHH filter classification.
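The AUC can be computed directly from test-set scores via the rank (Mann-Whitney) formulation: it equals the probability that a randomly chosen positive sample is scored above a randomly chosen negative one. The function below is a generic sketch, not the study's exact code:

```python
import numpy as np

def auc_score(labels, scores):
    """ROC AUC: probability a random positive outscores a random negative
    (ties counted as half a win)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, float)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # Pairwise comparisons; fine for small test sets like the one used here.
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))
```

An AUC of 0.9875, as reported for w-HHH, means almost every Grade III test case was scored above every Grade II case.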

5. Discussion

Deep learning and radiomics are used together in medical image processing. The accurate analysis and combined application of these two key methods have the potential to transform radiology altogether and open up a new field in medical imaging. To accomplish this, the right combination of radiomic analysis and deep learning method is very important.
The numerical results obtained from image segmentation and preprocessing affect the classification findings. One of the most important steps in determining tumor grade is radiomic feature extraction, together with the feature selection methods applied when creating a model. In a deep convolutional encoder-decoder study, accuracy was calculated by extracting 90 radiomic features from 83 brain sub-regions using only 2D slice images [66]. In our study, by contrast, 3D image information along the z-axis was utilized: the number of radiomic features obtained is high, and the tumor regions were segmented in three dimensions. Using 3D images in feature extraction provided an important advantage. The purpose of feature extraction is to capture as many informative features as possible by applying different operations to the collected data. Previous glioma grading studies [67,68,69] used ROC analysis to relate parametric values to glioma grades; from that information alone, it is very difficult to determine which parameters and properties are best suited to glioma grading. Some radiomic features selected in previous studies were thought to help separate gliomas of different grades, while others were not significantly correlated with glioma grade [70,71]. It is therefore more useful to identify the most effective method by trying comprehensive parametric combinations in different groups rather than relying on a single parameter. Many researchers have studied segmentation. Rundo et al. [72] studied gross tumor volume (GTV) segmentation. During the treatment planning phase, the GTV is often delineated by experienced neurosurgeons and radiologists on MR images using purely manual segmentation procedures; they proposed a semi-automatic seeded image segmentation method.
The GTVCUT segmentation approach gave successful results when processing heterogeneous tumors containing diffuse internal necrotic material or cysts. To improve brain tumor segmentation, Sompong and Wongthanavasu proposed a framework consisting of two paradigms, image transformation and segmentation [73]; their study focused on segmenting the edema regions of T2w MRI images. The Tumor-Cut algorithm developed in that study faces a robustness problem that leads to insufficient segmentation during seed growth, so the GrowCut algorithm was used for segmentation instead. Rundo et al. proposed a fully automated multimodal segmentation approach to delineate the Biological Target Volume (BTV) and Gross Target Volume (GTV) from PET and MRI images of 19 metastatic brain tumors; their experimental results showed that the GTV and BTV segmentations were statistically related (Spearman rank correlation coefficient: 0.898) but did not have a high degree of similarity (mean Membrane Similarity Coefficient: 61.87 ± 14.64) [74]. In contrast, only MRI images were used for segmentation in our study. Accordingly, the most effective and significant parameters related to glioma grade were evaluated in eight low- and high-sub-band wavelet groups, and different accuracy values were obtained in each group. In a study comparing the GrowCut algorithm with the GraphCut, GrabCut, and Random Walker algorithms, GrowCut was reported to segment images faster and with higher quality [25]. Several basic properties of the GrowCut algorithm made it an appropriate choice for segmentation: (1) natural handling of images, (2) multi-label segmentation, (3) ease of use and application, and (4) support for user input [75]. In this study, brain tumors were segmented with 3D Slicer, a free open-source software platform for biomedical research, and the GrowCut algorithm integrated into it.
This increased the originality and validity of the present study. Different parameters show statistical significance in each group, which indicates that no single parameter is effective in glioma grading on its own. It is now known that texture features, and quantitative features in particular, are important variables in the field of radiomics. Kharrat et al. [76] used the 2D wavelet transform and a spatial gray-level dependency matrix for feature extraction. In this study, the 3D wavelet transform is used, and the classification results of radiomic features obtained from eight different wavelet groups are evaluated.
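To illustrate how a single 3D decomposition level yields the eight sub-band groups (LLL through HHH), here is a minimal hand-rolled Haar transform. The study used pyradiomics-style wavelet filtering, so this sketch, which assumes even volume dimensions and a Haar basis, is illustrative only:

```python
import numpy as np

def haar_step(a, axis):
    """Single Haar split along one axis: low = pairwise sum, high = pairwise
    difference, both scaled by 1/sqrt(2). Assumes an even length along `axis`."""
    a = np.moveaxis(a, axis, 0)
    low = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    high = (a[0::2] - a[1::2]) / np.sqrt(2.0)
    return np.moveaxis(low, 0, axis), np.moveaxis(high, 0, axis)

def dwt3d_haar(volume):
    """One-level 3D Haar DWT returning eight sub-bands keyed 'LLL'..'HHH'.

    The i-th letter of each key is the filter applied along axis i
    (L = low-pass, H = high-pass)."""
    bands = {"": np.asarray(volume, float)}
    for axis in range(3):
        nxt = {}
        for key, arr in bands.items():
            lo, hi = haar_step(arr, axis)
            nxt[key + "L"], nxt[key + "H"] = lo, hi
        bands = nxt
    return bands
```

Radiomic features are then extracted separately from each of the eight filtered volumes, which is how the w-LLL through w-HHH feature groups in Table A1 arise.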
The purpose of feature selection is to reduce the number of extracted features in the most meaningful way using appropriate methods; the selected features should firmly describe the concepts hidden in the data. An over-fitted selection can give excellent results on the training data, but model performance drops when new data are presented to the algorithm, which is an important problem in machine learning. Our results showed that the First-order Energy, GLDM Dependence Non-Uniformity, GLSZM Gray Level Non-Uniformity, and NGTDM Coarseness features of the 3D wavelet filters had high power in differentiating Grade II from Grade III in each MRI sequence. Welch et al. identified and used the most stable radiomic properties (First-order Energy, Shape-Compactness, Texture-Gray Level Non-Uniformity (GLNU), and Wavelet HLH-GLNU) in their study [77].
In addition, it is very difficult for the human eye to detect and quantify some statistical features, such as wavelet and first-order features [37]. Although automated tools significantly affect the process of identifying the ROI and reduce variability among radiologists, they are not yet used fully and efficiently. For this reason, the expert-defined ROI fields in the dataset were tested here with different automatic segmentation tools, and high accuracy was achieved. In the classification of low-grade and high-grade glioma, the GLCM correlation feature was reported to have the highest discriminative power among texture features [78]. In our study, the Correlation, Idn, and Idmn GLCM features likewise revealed significant differences between Grade II and Grade III. The radiomic features obtained from radiomic analysis play a very important role in classification. In another brain tumor study, the authors [79] also used the DWT but performed wavelet feature selection with PCA, a mathematical operation that converts a set of semantically related variables into fewer unrelated variables called principal components. In our study, the radiomic feature extraction process and the similarity between the Grade II and Grade III groups for each feature were examined with the Mann-Whitney U nonparametric test. With the applied model, the number of radiomic features was reduced according to this statistical relationship, and in this way high classification accuracy was obtained. In studies using the PCA method, by contrast, the algorithm restricts the radiomic features and collects them into a certain category, and it is difficult to classify Grade II and Grade III gliomas with high accuracy since their radiomic features are similar.
Most methods developed in the literature classify without using a sufficient number of radiomic features. In a recent work, Dong et al. selected and classified only 3 out of a total of 321 radiomic features, reaching accuracies between 0.70 and 0.76 with five classifiers [80]. Huang et al., studying 576 brain metastases from 161 patients, obtained 107 radiomic features and selected 8 of them with SPSS [81]. In another study, radiomic feature selection was applied and the radiomic features of Alzheimer patients were classified from brain images with accuracies of 91.5%, 83.1%, and 85.9%. As these studies show, the number of radiomic features used affects the accuracy rate. Here, 144 first-order features and 600 texture features, a total of 744 3D wavelet features, were extracted with 3D Slicer, which is a very high number compared to the literature.
Cho et al. [15] chose five radiomic features for glioma grading; their three classifiers showed an average AUC of 0.9400 on the training groups and 0.9030 on the test groups (logistic regression: 0.9010, support vector machine: 0.8866, random forest: 0.9213). In another study, 408 brain metastases from 87 patients were examined, 440 radiomic features were extracted, and the AUC was calculated with a random forest classifier [82]. In our study, a very high AUC of 98.75% was achieved in the HHH wavelet group. Successful performance in radiomics depends on the type of data used. Support vector machines and logistic regression are useful for dividing cohorts into two groups, such as good and bad prognosis, but they cannot provide detailed information when there are more variables. Another method commonly used in medical imaging in recent years is deep learning; recent research combines deep learning algorithms and radiomics in medical image analysis tasks such as image acquisition, segmentation, and classification. Growing computational capacity and expanding medical imaging data make this possible.
Khawaldeh et al. [83] classified the grades of glioma tumors using convolutional neural networks (CNNs); their results showed reasonable performance in characterizing medical brain images, with an accuracy of 91.16%. Obtaining many features does not always guarantee high accuracy: in a deep learning-based study of brain tumor segmentation and survival prediction in glioma cases using a CNN architecture, 4524 radiomic features were extracted and only 61% accuracy was achieved with the selected strong features [84]. We classified brain tumors with a DNN at high accuracy using the radiomic features we obtained; the correct determination of the radiomic features and the harmony of the model parameters contributed significantly to these results. Sajjad et al. [85] used the VGG-19 CNN architecture to classify brain tumor grades (I, II, III, and IV) and achieved 0.90 accuracy and 0.96 average precision. CNNs are disadvantageous for large images, such as 256 × 256, when a large number of filters must be processed [86]. In our study, eight sub-band wavelet filters were used, and radiomic properties were extracted from each one separately. A DNN can learn features automatically and has been applied successfully in computer vision [87]; for this reason, a DNN was used in model selection. Zia et al. [88] used the discrete wavelet transform for feature extraction, principal component analysis for feature selection, and a support vector machine for classification. Three glioma grades were classified using 92 MR images, and the proposed method achieved a highest accuracy of 88.26%, a highest sensitivity of 92.23%, and a highest specificity of 93.93%. Rathi and Palani applied a "tumor or non-tumor" classification with a deep learning classifier and achieved 83% accuracy [89]; although that task is easier, the accuracy obtained was insufficient, which indicates the shortcomings of the applied method.
In another study, the results showed reasonably good performance, with a maximum classification accuracy of 92.86% for the Wndchrm DNN classifier [90]. Wndchrm is open-source software that can be used for classification with feature extraction and selection processes in the biomedical field; the results obtained with this software are lower than those obtained with our feature selection method. The growth of open-source artificial intelligence libraries is increasing the number of deep learning applications in medical imaging. However, interpreting these deep learning systems correctly and running the correct methods on datasets requires significant expertise. While grading brain tumors, we did not limit ourselves to a single radiomic feature category: the first-order features, the morphological and wavelet features, and the properties of the matrix groups were all examined. This provided a comprehensive acquisition of all macro- and micro-level features of the 3D tumor area and contributed to the high accuracy achieved. More features and higher accuracy were obtained compared to other studies in the literature that examine different categories of radiomic features at the same time. The adopted network structure classified Grade II and Grade III tumors with high accuracy using wavelet filters; the DNN architecture, statistical analysis process, and segmentation method used in this study are of great importance.
In addition, when we recognize that most wavelet and first-order features cannot be identified or detected by the human eye, the medical contribution of this study is better understood. The method applied in this study achieved high success with the w-HHH radiomic features, with an accuracy of 96.15%. As a result, using wavelet-based radiomic features in classification with deep learning methods has led to higher quantitative results.

6. Conclusions and Future Work

In this study, a DNN-based architecture is proposed for brain tumor classification. Glioma grades can be accurately determined by a combination of high-dimensional 3D imaging features, an advanced feature selection method, and a deep learning classifier. The proposed model was highly effective in grading gliomas with 96.15% accuracy, 94.12% precision, 100% recall, 96.97% F1 score, and 98.75% Area under the ROC curve on wavelet filters. The DNN model was used to detect the wavelet filter with the highest accuracy in the classification of Grade II and Grade III tumors.
We believe that the method applied in our study can contribute to highly efficient computer-aided diagnostic systems for gliomas. Magnetic resonance imaging maintains its effectiveness in clinical approach, as it is a non-invasive technique in this patient group. Deep learning and radiomic analysis methods will become an indispensable part of clinical support systems over time and will become markers of tumor grades of medical images in the future.
In future studies, automatic ROI detection and segmentation will be applied directly to the images with Mask R-CNN, and tumors will then be classified using state-of-the-art pre-trained transfer learning models.

Author Contributions

Conceptualization, G.Ç., B.G.E. and A.H.Y.; methodology, G.Ç., B.G.E. and A.H.Y.; software, G.Ç.; validation, G.Ç., B.G.E. and A.H.Y.; formal analysis, G.Ç., B.G.E. and A.H.Y.; investigation, G.Ç., B.G.E. and A.H.Y.; resources, G.Ç., B.G.E. and A.H.Y.; data curation, G.Ç., B.G.E. and A.H.Y.; writing—original draft preparation, G.Ç., B.G.E. and A.H.Y.; writing—review and editing, G.Ç., B.G.E. and A.H.Y.; visualization, G.Ç.; supervision, B.G.E.; project administration, B.G.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

In Table A1, the radiomic features considered in this study are listed.
Table A1. Radiomic features included in the study.
Number/Wavelet Group | Class | Feature | Mann-Whitney U Test | Evaluation
w1-LLH | First Order | Energy | p = 0.004 | p < 0.05
w2-LLH | First Order | TotalEnergy | p = 0.007 | p < 0.05
w3-LLH | GLDM | DependenceNonUniformity | p = 0.000 | p < 0.05
w4-LLH | GLDM | GrayLevelNonUniformity | p = 0.009 | p < 0.05
w5-LLH | GLDM | LargeDependenceHighGrayLevelEmphasis | p = 0.047 | p < 0.05
w6-LLH | GLDM | LowGrayLevelEmphasis | p = 0.007 | p < 0.05
w7-LLH | GLDM | SmallDependenceLowGrayLevelEmphasis | p = 0.000 | p < 0.05
w8-LLH | GLRLM | GrayLevelNonUniformity | p = 0.005 | p < 0.05
w9-LLH | GLRLM | LongRunLowGrayLevelEmphasis | p = 0.013 | p < 0.05
w10-LLH | GLRLM | LowGrayLevelRunEmphasis | p = 0.007 | p < 0.05
w11-LLH | GLRLM | RunLengthNonUniformity | p = 0.000 | p < 0.05
w12-LLH | GLRLM | ShortRunLowGrayLevelEmphasis | p = 0.005 | p < 0.05
w13-LLH | GLSZM | GrayLevelNonUniformity | p = 0.000 | p < 0.05
w14-LLH | GLSZM | LowGrayLevelZoneEmphasis | p = 0.006 | p < 0.05
w15-LLH | GLSZM | SizeZoneNonUniformity | p = 0.000 | p < 0.05
w16-LLH | GLSZM | SmallAreaLowGrayLevelEmphasis | p = 0.003 | p < 0.05
w17-LLH | GLSZM | ZoneEntropy | p = 0.037 | p < 0.05
w18-LLH | NGTDM | Coarseness | p = 0.000 | p < 0.05
w1-LHL | First Order | Energy | p = 0.010 | p < 0.05
w2-LHL | First Order | TotalEnergy | p = 0.017 | p < 0.05
w3-LHL | GLCM | Idmn | p = 0.025 | p < 0.05
w4-LHL | GLCM | Idn | p = 0.036 | p < 0.05
w5-LHL | GLDM | DependenceNonUniformity | p = 0.000 | p < 0.05
w6-LHL | GLDM | GrayLevelNonUniformity | p = 0.005 | p < 0.05
w7-LHL | GLDM | LargeDependenceHighGrayLevelEmphasis | p = 0.016 | p < 0.05
w8-LHL | GLDM | LowGrayLevelEmphasis | p = 0.041 | p < 0.05
w9-LHL | GLDM | SmallDependenceLowGrayLevelEmphasis | p = 0.001 | p < 0.05
w10-LHL | GLRLM | GrayLevelNonUniformity | p = 0.002 | p < 0.05
w11-LHL | GLRLM | LowGrayLevelRunEmphasis | p = 0.040 | p < 0.05
w12-LHL | GLRLM | RunLengthNonUniformity | p = 0.000 | p < 0.05
w13-LHL | GLRLM | ShortRunLowGrayLevelEmphasis | p = 0.029 | p < 0.05
w14-LHL | GLSZM | GrayLevelNonUniformity | p = 0.000 | p < 0.05
w15-LHL | GLSZM | LargeAreaHighGrayLevelEmphasis | p = 0.005 | p < 0.05
w16-LHL | GLSZM | LowGrayLevelZoneEmphasis | p = 0.028 | p < 0.05
w17-LHL | GLSZM | SizeZoneNonUniformity | p = 0.001 | p < 0.05
w18-LHL | GLSZM | SmallAreaLowGrayLevelEmphasis | p = 0.017 | p < 0.05
w19-LHL | NGTDM | Coarseness | p = 0.000 | p < 0.05
w1-LHH | First Order | Energy | p = 0.009 | p < 0.05
w2-LHH | First Order | TotalEnergy | p = 0.016 | p < 0.05
w3-LHH | GLDM | DependenceNonUniformity | p = 0.000 | p < 0.05
w4-LHH | GLDM | GrayLevelNonUniformity | p = 0.004 | p < 0.05
w5-LHH | GLDM | LargeDependenceHighGrayLevelEmphasis | p = 0.023 | p < 0.05
w6-LHH | GLDM | SmallDependenceLowGrayLevelEmphasis | p = 0.004 | p < 0.05
w7-LHH | GLRLM | GrayLevelNonUniformity | p = 0.002 | p < 0.05
w8-LHH | GLRLM | RunLengthNonUniformity | p = 0.000 | p < 0.05
w9-LHH | GLSZM | GrayLevelNonUniformity | p = 0.000 | p < 0.05
w10-LHH | GLSZM | LargeAreaHighGrayLevelEmphasis | p = 0.012 | p < 0.05
w11-LHH | GLSZM | LowGrayLevelZoneEmphasis | p = 0.043 | p < 0.05
w12-LHH | GLSZM | SizeZoneNonUniformity | p = 0.003 | p < 0.05
w13-LHH | GLSZM | SmallAreaLowGrayLevelEmphasis | p = 0.025 | p < 0.05
w14-LHH | NGTDM | Coarseness | p = 0.000 | p < 0.05
w1-LLL | GLCM | Idmn | p = 0.002 | p < 0.05
w2-LLL | GLCM | Idn | p = 0.003 | p < 0.05
w3-LLL | GLCM | JointEntropy | p = 0.023 | p < 0.05
w4-LLL | GLDM | DependenceEntropy | p = 0.004 | p < 0.05
w5-LLL | GLDM | DependenceNonUniformity | p = 0.000 | p < 0.05
w6-LLL | GLDM | GrayLevelNonUniformity | p = 0.012 | p < 0.05
w7-LLL | GLDM | LowGrayLevelEmphasis | p = 0.018 | p < 0.05
w8-LLL | GLDM | SmallDependenceLowGrayLevelEmphasis | p = 0.002 | p < 0.05
w9-LLL | GLRLM | GrayLevelNonUniformity | p = 0.008 | p < 0.05
w10-LLL | GLRLM | LongRunLowGrayLevelEmphasis | p = 0.025 | p < 0.05
w11-LLL | GLRLM | LowGrayLevelRunEmphasis | p = 0.016 | p < 0.05
w12-LLL | GLRLM | RunLengthNonUniformity | p = 0.000 | p < 0.05
w13-LLL | GLRLM | ShortRunLowGrayLevelEmphasis | p = 0.015 | p < 0.05
w14-LLL | GLSZM | GrayLevelNonUniformity | p = 0.000 | p < 0.05
w15-LLL | GLSZM | LargeAreaHighGrayLevelEmphasis | p = 0.004 | p < 0.05
w16-LLL | GLSZM | LowGrayLevelZoneEmphasis | p = 0.013 | p < 0.05
w17-LLL | GLSZM | SizeZoneNonUniformity | p = 0.000 | p < 0.05
w18-LLL | GLSZM | SmallAreaLowGrayLevelEmphasis | p = 0.006 | p < 0.05
w19-LLL | GLSZM | ZoneEntropy | p = 0.010 | p < 0.05
w20-LLL | NGTDM | Coarseness | p = 0.000 | p < 0.05
w1-HLL | First Order | Energy | p = 0.010 | p < 0.05
w2-HLL | First Order | TotalEnergy | p = 0.025 | p < 0.05
w3-HLL | GLDM | DependenceNonUniformity | p = 0.000 | p < 0.05
w4-HLL | GLDM | GrayLevelNonUniformity | p = 0.007 | p < 0.05
w5-HLL | GLDM | SmallDependenceLowGrayLevelEmphasis | p = 0.008 | p < 0.05
w6-HLL | GLRLM | GrayLevelNonUniformity | p = 0.002 | p < 0.05
w7-HLL | GLRLM | RunLengthNonUniformity | p = 0.000 | p < 0.05
w8-HLL | GLSZM | GrayLevelNonUniformity | p = 0.000 | p < 0.05
w9-HLL | GLSZM | LargeAreaHighGrayLevelEmphasis | p = 0.026 | p < 0.05
w10-HLL | GLSZM | SizeZoneNonUniformity | p = 0.001 | p < 0.05
w11-HLL | NGTDM | Coarseness | p = 0.000 | p < 0.05
w1-HLH | First Order | Energy | p = 0.002 | p < 0.05
w2-HLH | First Order | TotalEnergy | p = 0.005 | p < 0.05
w3-HLH | GLCM | Correlation | p = 0.047 | p < 0.05
w4-HLH | GLDM | DependenceNonUniformity | p = 0.000 | p < 0.01
w5-HLH | GLDM | GrayLevelNonUniformity | p = 0.006 | p < 0.01
w6-HLH | GLDM | LargeDependenceHighGrayLevelEmphasis | p = 0.026 | p < 0.05
w7-HLH | GLDM | LowGrayLevelEmphasis | p = 0.044 | p < 0.05
w8-HLH | GLDM | SmallDependenceLowGrayLevelEmphasis | p = 0.011 | p < 0.05
w9-HLH | GLRLM | GrayLevelNonUniformity | p = 0.002 | p < 0.01
w10-HLH | GLRLM | LowGrayLevelRunEmphasis | p = 0.043 | p < 0.05
w11-HLH | GLRLM | RunLengthNonUniformity | p = 0.000 | p < 0.01
w12-HLH | GLRLM | ShortRunLowGrayLevelEmphasis | p = 0.035 | p < 0.05
w13-HLH | GLSZM | GrayLevelNonUniformity | p = 0.000 | p < 0.01
w14-HLH | GLSZM | LargeAreaHighGrayLevelEmphasis | p = 0.022 | p < 0.05
w15-HLH | GLSZM | LowGrayLevelZoneEmphasis | p = 0.023 | p < 0.05
w16-HLH | GLSZM | SizeZoneNonUniformity | p = 0.001 | p < 0.01
w17-HLH | GLSZM | SmallAreaLowGrayLevelEmphasis | p = 0.014 | p < 0.05
w18-HLH | GLSZM | ZoneEntropy | p = 0.022 | p < 0.05
w19-HLH | NGTDM | Coarseness | p = 0.000 | p < 0.01
w1-HHL | First Order | Energy | p = 0.012 | p < 0.05
w2-HHL | First Order | TotalEnergy | p = 0.025 | p < 0.05
w3-HHL | GLDM | DependenceNonUniformity | p = 0.000 | p < 0.01
w4-HHL | GLDM | GrayLevelNonUniformity | p = 0.001 | p < 0.01
w5-HHL | GLDM | LargeDependenceHighGrayLevelEmphasis | p = 0.034 | p < 0.05
w6-HHL | GLDM | SmallDependenceLowGrayLevelEmphasis | p = 0.002 | p < 0.01
w7-HHL | GLRLM | GrayLevelNonUniformity | p = 0.000 | p < 0.01
w8-HHL | GLRLM | RunLengthNonUniformity | p = 0.000 | p < 0.01
w9-HHL | GLSZM | GrayLevelNonUniformity | p = 0.000 | p < 0.01
w10-HHL | GLSZM | LargeAreaHighGrayLevelEmphasis | p = 0.001 | p < 0.01
w11-HHL | GLSZM | LowGrayLevelZoneEmphasis | p = 0.049 | p < 0.05
w12-HHL | GLSZM | SizeZoneNonUniformity | p = 0.025 | p < 0.05
w13-HHL | NGTDM | Coarseness | p = 0.000 | p < 0.01
w1-HHH | First Order | Energy | p = 0.010 | p < 0.01
w2-HHH | First Order | TotalEnergy | p = 0.021 | p < 0.05
w3-HHH | GLDM | DependenceNonUniformity | p = 0.000 | p < 0.01
w4-HHH | GLDM | GrayLevelNonUniformity | p = 0.001 | p < 0.01
w5-HHH | GLDM | LargeDependenceHighGrayLevelEmphasis | p = 0.042 | p < 0.05
w6-HHH | GLDM | SmallDependenceLowGrayLevelEmphasis | p = 0.007 | p < 0.01
w7-HHH | GLRLM | GrayLevelNonUniformity | p = 0.000 | p < 0.01
w8-HHH | GLRLM | RunLengthNonUniformity | p = 0.000 | p < 0.01
w9-HHH | GLSZM | GrayLevelNonUniformity | p = 0.000 | p < 0.01
w10-HHH | GLSZM | LargeAreaHighGrayLevelEmphasis | p = 0.001 | p < 0.01
w11-HHH | GLSZM | SizeZoneNonUniformity | p = 0.004 | p < 0.01
w12-HHH | NGTDM | Coarseness | p = 0.000 | p < 0.01

References

  1. Acharya, U.R.; Fernandes, S.L.; WeiKoh, J.E.; Ciaccio, E.J.; Fabell, M.K.M.; Tanik, U.J.; Rajinikanth, V.; Yeong, C.H. Automated Detection of Alzheimer’s Disease Using Brain MRI Images—A Study with Various Feature Extraction Techniques. J. Med. Syst. 2019, 43, 302. [Google Scholar] [CrossRef] [PubMed]
  2. Louis, D.N.; Perry, A.; Reifenberger, G.; Von Deimling, A.; Figarella-Branger, M.; Cavenee, W.K.; Ohgaki, H.; Wiestler, O.D.; Kleihues, P.; Ellison, D.W. The 2016 World Health Organization Classification of Tumors of the Central Nervous System: A summary. Acta Neuropathol. 2016, 131, 803–820. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Badža, M.M.; Barjaktarovic, M. Classification of Brain Tumors from MRI Images Using a Convolutional Neural Network. Appl. Sci. 2020, 10, 1999. [Google Scholar] [CrossRef] [Green Version]
  4. Brat, D.J.; Verhaak, R.G.; Aldape, K.D.; Yung, W.K.; Salama, S.R.; Cooper, L.A.D.; Rheinbay, E.; Miller, C.R.; Vitucci, M.; Morozova, O.; et al. Cancer Genome Atlas Research Network. Comprehensive, integrative genomic analysis of diffuse lowergrade gliomas. N. Engl. J. Med. 2015, 372, 2481–2498. [Google Scholar]
  5. Brunese, L.; Mercaldo, F.; Reginelli, A.; Santone, A. An ensemble learning approach for brain cancer detection exploiting radiomic features. Comput. Methods Programs Biomed. 2020, 185, 105134. [Google Scholar] [CrossRef]
  6. Chen, W.; Liu, B.; Peng, S.; Sun, J.; Qiao, X. Computer-Aided Grading of Gliomas Combining Automatic Segmentation and Radiomics. Int. J. Biomed. Imaging 2018, 2018, 1–11. [Google Scholar] [CrossRef] [Green Version]
  7. Chong, Y.; Kim, J.-H.; Lee, H.Y.; Ahn, Y.C.; Lee, K.S.; Ahn, M.-J.; Kim, J.; Shim, Y.M.; Han, J.; Choi, Y.-L. Quantitative CT Variables Enabling Response Prediction in Neoadjuvant Therapy with EGFR-TKIs: Are They Different from Those in Neoadjuvant Concurrent Chemoradiotherapy? PLoS ONE 2014, 9, e88598. [Google Scholar] [CrossRef]
  8. Kolossváry, M.; Kellermayer, M.; Merkely, B.; Maurovich-Horvat, P. Cardiac Computed Tomography Radiomics. J. Thorac. Imaging 2018, 33, 26–34. [Google Scholar] [CrossRef] [Green Version]
  9. Limkin, E.J.C.; Sun, R.; Dercle, L.; Zacharaki, E.I.; Robert, C.; Reuzé, S.; Schernberg, A.; Paragios, N.; Deutsch, E.; Ferté, C. Promises and challenges for the implementation of computational medical imaging (radiomics) in oncology. Ann. Oncol. 2017, 28, 1191–1206. [Google Scholar] [CrossRef]
  10. Tian, Q.; Yan, L.-F.; Zhang, X.; Hu, Y.-C.; Han, Y.; Liu, Z.-C.; Nan, H.-Y.; Sun, Q.; Sun, Y.-Z.; Yang, Y.; et al. Radiomics strategy for glioma grading using texture features from multiparametric MRI. J. Magn. Reson. Imaging 2018, 48, 1518–1528. [Google Scholar] [CrossRef]
  11. Cui, G.; Jeong, J.; Press, B.; Lei, Y.; Shu, H.-K.; Liu, T.; Curran, W.; Mao, H.; Yang, X. Machine-learning-based Classification of Lower-grade gliomas and High-grade gliomas using Radiomic Features in Multi-parametric MRI. arXiv 2019, arXiv:1911.10145. [Google Scholar]
  12. Hsieh, K.L.-C.; Chen, C.-Y.; Lo, C.-M. Quantitative glioma grading using transformed gray-scale invariant textures of MRI. Comput. Biol. Med. 2017, 83, 102–108. [Google Scholar] [CrossRef] [PubMed]
  13. Hsieh, K.L.-C.; Lo, C.-M.; Hsiao, C.-J. Computer-aided grading of gliomas based on local and global MRI features. Comput. Methods Programs Biomed. 2017, 139, 31–38. [Google Scholar] [CrossRef] [PubMed]
  14. Qin, J.-B.; Liu, Z.; Zhang, H.; Shen, C.; Wang, X.-C.; Tan, Y.; Wang, S.; Wu, X.-F.; Tian, J. Grading of Gliomas by Using Radiomic Features on Multiple Magnetic Resonance Imaging (MRI) Sequences. Med. Sci. Monit. 2017, 23, 2168–2178. [Google Scholar] [CrossRef] [Green Version]
  15. Cho, H.-H.; Lee, S.-H.; Kim, J.; Park, H. Classification of the glioma grading using radiomics analysis. PeerJ 2018, 6, e5982. [Google Scholar] [CrossRef] [Green Version]
  16. Mohan, G.; Subashini, M.M. MRI based medical image analysis: Survey on brain tumor grade classification. Biomed. Signal. Process. Control. 2018, 39, 139–161. [Google Scholar] [CrossRef]
  17. Ramteke, R.; Monali, K.Y. Automatic medical image classification and abnormality detection using k-nearest neighbour. Int. J. Adv. Comput. Res. 2012, 2, 190. [Google Scholar]
  18. Gadpayleand, P.; Mahajani, P. Detection and classification of brain tumor in MRI images. Int. J. Electr. Comput. Eng. 2013, 5, 45–49. [Google Scholar]
  19. Ghosh, D.; Bandyopadhyay, S.K. Brain tumor detection from MRI image: An approach. Int. J. Appl. Res. 2017, 3, 1152–1159. [Google Scholar]
  20. Abidin, A.Z.; Dar, I.; D’Souza, A.M.; Lin, E.P.; Wismüller, A. Investigating a quantitative radiomics approach for brain tumor classification. In Proceedings of the Medical Imaging 2019: Biomedical Applications in Molecular, Structural, and Functional Imaging, San Diego, CA, USA, 19–21 February 2019; p. 109530. [Google Scholar]
  21. George, D.N.; Jehlol, H.B.; Oleiwi, A.S.A. Brain tumor detection using shape features and machine learning algorithms. Int. J. Adv. Res. Comput. Sci. Softw. Eng. (IJARCSSE) 2015, 5, 454–459. [Google Scholar]
  22. Bahadure, N.; Ray, A.K.; Thethi, H.P. Image Analysis for MRI Based Brain Tumor Detection and Feature Extraction Using Biologically Inspired BWT and SVM. Int. J. Biomed. Imaging 2017, 2017, 1–12. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  23. Nabizadeh, N.; Kubat, M. Brain tumors detection and segmentation in MR images: Gabor wavelet vs. statistical features. Comput. Electr. Eng. 2015, 45, 286–301. [Google Scholar] [CrossRef]
  24. Vial, A.; Stirling, D.; Field, M.; Ros, M.; Ritz, C.; Carolan, M.; Holloway, L.; Miller, A.A. The role of deep learning and radiomic feature extraction in cancer-specific predictive modelling: A review. Transl. Cancer Res. 2018, 7, 803–816. [Google Scholar] [CrossRef]
  25. Vezhnevets, V.; Konouchine, V. GrowCut: Interactive multi-label ND image segmentation by cellular automata. Proc. Graph. 2005, 150–156. [Google Scholar]
  26. Clark, K.W.; Vendt, B.; Smith, K.; Freymann, J.; Kirby, J.; Koppel, P.; Moore, S.; Phillips, S.; Maffitt, D.; Pringle, M.; et al. The Cancer Imaging Archive (TCIA): Maintaining and Operating a Public Information Repository. J. Digit. Imaging 2013, 26, 1045–1057. [Google Scholar] [CrossRef] [Green Version]
  27. Akkus, Z.; Ali, I.; Sedlář, J.; Agrawal, J.P.; Parney, I.F.; Giannini, C.; Erickson, B.J. Predicting Deletion of Chromosomal Arms 1p/19q in Low-Grade Gliomas from MR Images Using Machine Intelligence. J. Digit. Imaging 2017, 30, 469–476. [Google Scholar] [CrossRef] [Green Version]
  28. Erickson, B.; Akkus, Z.; Sedlar, J.; Kofiatis, P. Data from LGG-1p19qDeletion. The Cancer Imaging Archive 2017. Available online: https://doi.org/10.7937/K9/TCIA.2017.dwehtz9v (accessed on 15 February 2020).
  29. Arya, A.; Bhateja, V.; Nigam, M.; Bhadauria, A.S. Enhancement of brain MR-T1/T2 images using mathematical morphology. In Information and Communication Technology for Sustainable Development; Springer: Singapore, 2020; pp. 833–840. [Google Scholar]
  30. Amin, J.; Sharif, M.; Gul, N.; Yasmin, M.; Shad, S.A. Brain tumor classification based on DWT fusion of MRI sequences using convolutional neural network. Pattern Recognit. Lett. 2020, 129, 115–122. [Google Scholar] [CrossRef]
  31. Szwarc, P.; Kawa, J.; Bobek-Billewicz, B.; Pietka, E. Segmentation of brain tumours in MR images using fuzzy clustering techniques. In Proceedings of the Computer Assisted Radiology and Surgery (CARS), Geneva, Switzerland, 23–26 June 2010; pp. 20–24. [Google Scholar]
  32. Gibbs, P.; Buckley, D.; Blackband, S.J.; Horsman, A. Tumour volume determination from MR images by morphological segmentation. Phys. Med. Boil. 1996, 41, 2437–2446. [Google Scholar] [CrossRef]
  33. Droske, M.; Meyer, B.; Rumpf, M.; Schaller, C. An adaptive level set method for interactive segmentation of intracranial tumors. Neurol. Res. 2005, 27, 363–370. [Google Scholar] [CrossRef]
  34. Egger, J.; Colen, R.R.; Freisleben, B.; Nimsky, C. Manual refinement system for graph-based segmentation results in the medical domain. J. Med. Syst. 2011, 36, 2829–2839. [Google Scholar] [CrossRef] [Green Version]
  35. Wang, Y.; Che, X.; Ma, S. Nonlinear filtering based on 3D wavelet transform for MRI denoising. EURASIP J. Adv. Signal. Process. 2012, 2012, 40. [Google Scholar] [CrossRef] [Green Version]
  36. Kim, T.-Y.; Cho, N.-H.; Jeong, G.-B.; Bengtsson, E.; Choi, H.-K. 3D Texture Analysis in Renal Cell Carcinoma Tissue Image Grading. Comput. Math. Methods Med. 2014, 2014, 1–12. [Google Scholar] [CrossRef] [PubMed]
  37. Artzi, M.; Bressler, I.; Ben Bashat, D. Differentiation between glioblastoma, brain metastasis and subtypes using radiomics analysis. J. Magn. Reson. Imaging 2019, 50, 519–528. [Google Scholar] [CrossRef] [PubMed]
  38. Ullah, Z.; Farooq, M.U.; Lee, S.-H.; An, D. A Hybrid Image Enhancement Based Brain MRI Images Classification Technique. Med. Hypotheses 2020, 109922. [Google Scholar] [CrossRef]
  39. Strang, G.; Nguyen, T. Wavelets and Filter Banks; Wellesley-Cambridge Press: Wellesley, MA, USA, 1997; ISBN 0-9614088-7-1. [Google Scholar]
  40. Mallat, S.G.; Heil, C.; Walnut, D.F. A Theory for Multiresolution Signal Decomposition: The Wavelet Representation. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11, 674–693. [Google Scholar] [CrossRef] [Green Version]
  41. Daubechies, I. Ten Lectures on Wavelets (CBMS-NSF Regional Conference Series in Applied Mathematics); SIAM: Philadelphia, PA, USA, 1990. [Google Scholar]
  42. Mallat, S. A Wavelet Tour of Signal Processing: The Sparse Way, 3rd ed.; Academic Press: Cambridge, MA, USA, 2009. [Google Scholar]
  43. Sharif, I.; Khare, S. Comparative Analysis of Haar and Daubechies Wavelet for Hyper Spectral Image Classification. ISPRS Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2014, 40, 937–941. [Google Scholar] [CrossRef] [Green Version]
  44. Daubechies, I. Orthonormal bases of compactly supported wavelets. Commun. Pure Appl. Math. 1988, 41, 909–996. [Google Scholar] [CrossRef] [Green Version]
  45. Soufi, M.; Arimura, H.; Nagami, N. Identification of optimal mother wavelets in survival prediction of lung cancer patients using wavelet decomposition-based radiomic features. Med. Phys. 2018, 45, 5116–5128. [Google Scholar] [CrossRef]
  46. Gillies, R.J.; Kinahan, P.E.; Hricak, H. Radiomics: Images Are More than Pictures, They Are Data. Radiology 2016, 278, 563–577. [Google Scholar] [CrossRef] [Green Version]
  47. Van Griethuysen, J.J.; Fedorov, A.; Parmar, C.; Hosny, A.; Aucoin, N.; Narayan, V.; Beets-Tan, R.G.; Fillion-Robin, J.-C.; Pieper, S.; Aerts, H.J.W.L. Computational Radiomics System to Decode the Radiographic Phenotype. Cancer Res. 2017, 77, e104–e107. [Google Scholar] [CrossRef] [Green Version]
  48. Lambin, P.; Leijenaar, R.T.; Deist, T.M.; Peerlings, J.; De Jong, E.E.; Van Timmeren, J.E.; Sanduleanu, S.; LaRue, R.T.; Even, A.J.; Jochems, A.; et al. Radiomics: The bridge between medical imaging and personalized medicine. Nat. Rev. Clin. Oncol. 2017, 14, 749–762. [Google Scholar] [CrossRef] [PubMed]
  49. Thibault, G.; Angulo, J.; Meyer, F. Advanced Statistical Matrices for Texture Characterization: Application to Cell Classification. IEEE Trans. Biomed. Eng. 2013, 61, 630–637. [Google Scholar] [CrossRef] [PubMed]
  50. IBM SPSS. IBM SPSS Statistics; Version 21; International Business Machines Corp: Boston, MA, USA, 2012; p. 126. [Google Scholar]
  51. Altın, Ş. Investigation of Relationships Between Salesperson’s Perceptions of Ethics towards Customers and Job Satisfaction with Structural Equation Modeling, Social and Human Sciences Studies; Çizgi Kitabevi: Konya, Turkey, 2018; pp. 276–285. ISBN 978-605-196-190-3. [Google Scholar]
  52. Kalaycı, Ş. SPSS Applied Multivariate Statistical Techniques; Asil Publication: Ankara, Turkey, 2010; p. 5. [Google Scholar]
  53. Tharani, S.; Yamini, C. Classification using Convolutional Neural Network for Heart and Diabetics Datasets. IJARCCE 2016, 5, 417–422. [Google Scholar] [CrossRef]
  54. Ravì, D.; Wong, C.; Deligianni, F.; Berthelot, M.; Andreu-Perez, J.; Lo, B.P.L.; Yang, G.-Z. Deep Learning for Health Informatics. IEEE J. Biomed. Health Inform. 2017, 21, 4–21. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  55. Mohsen, H.; El-Dahshan, E.-S.A.; Ei-Horbaty, E.-S.M.; Salem, A.-B.M. Classification using deep learning neural networks for brain tumors. Futur. Comput. Informatics J. 2018, 3, 68–71. [Google Scholar] [CrossRef]
  56. Nishio, M.; Sugiyama, O.; Yakami, M.; Ueno, S.; Kubo, T.; Kuroda, T.; Togashi, K. Computer-aided diagnosis of lung nodule classification between benign nodule, primary lung cancer, and metastatic lung cancer at different image size using deep convolutional neural network with transfer learning. PLoS ONE 2018, 13, e0200721. [Google Scholar] [CrossRef] [Green Version]
  57. Zhang, W.; Xu, L.; Li, Z.; Lu, Q.; Liu, Y. A Deep-Intelligence Framework for Online Video Processing. IEEE Softw. 2016, 33, 44–51. [Google Scholar] [CrossRef]
  58. Arslan, R.S.; Barişçi, N. Development of Output Correction Methodology for Long Short Term Memory-Based Speech Recognition. Sustainability 2019, 11, 4250. [Google Scholar] [CrossRef] [Green Version]
  59. Stanitsas, P.; Cherian, A.; Li, X.; Truskinovsky, A.; Morellas, V.; Papanikolopoulos, N. Evaluation of feature descriptors for cancerous tissue recognition. In Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 4–8 December 2016; pp. 1490–1495. [Google Scholar]
  60. Gao, Z.; Wang, L.; Zhou, L.; Zhang, J. HEp-2 Cell Image Classification With Deep Convolutional Neural Networks. IEEE J. Biomed. Health Inform. 2016, 21, 416–428. [Google Scholar] [CrossRef] [Green Version]
  61. Hossain, M.S.; Amin, S.U.; Alsulaiman, M.; Muhammad, G. Applying Deep Learning for Epilepsy Seizure Detection and Brain Mapping Visualization. ACM Trans. Multimed. Comput. Commun. Appl. 2019, 15, 1–17. [Google Scholar] [CrossRef]
  62. Candel, A.; Parmar, V.; LeDell, E.; Arora, A. Deep Learning with H2O; H2O. ai, Inc.: Mountain View, CA, USA, September 2016. [Google Scholar]
  63. Cook, D. Practical Machine Learning with H2O: Powerful, Scalable Techniques for Deep Learning and AI; O’Reilly Media, Inc.: Sebastopol, CA, USA, December 2016; ISBN 978-1-491-96460-6. [Google Scholar]
  64. Liu, L.; Chen, R.-C. A novel passenger flow prediction model using deep learning methods. Transp. Res. Part C Emerg. Technol. 2017, 84, 74–91. [Google Scholar] [CrossRef]
  65. Varghese, B.A.; Cen, S.Y.; Hwang, D.H.; Duddalwar, V.A. Texture Analysis of Imaging: What Radiologists Need to Know. Am. J. Roentgenol. 2019, 212, 520–528. [Google Scholar] [CrossRef]
  66. Shiri, I.; Ghafarian, P.; Geramifar, P.; Leung, K.H.-Y.; Ghelichoghli, M.; Oveisi, M.; Rahmim, A.; Ay, M.R. Direct attenuation correction of brain PET images using only emission data via a deep convolutional encoder-decoder (Deep-DAC). Eur. Radiol. 2019, 29, 6867–6879. [Google Scholar] [CrossRef] [PubMed]
  67. Meyer, P.T.; Schreckenberger, M.; Spetzger, U.; Meyer, G.F.; Sabri, O.; Setani, K.S.; Zeggel, T.; Buell, U. Comparison of visual and ROI-based brain tumour grading using 18F-FDG PET: ROC analyses. Eur. J. Nucl. Med. Mol. Imaging 2001, 28, 165–174. [Google Scholar] [CrossRef] [PubMed]
  68. Hakyemez, B.; Erdogan, C.; Ercan, I.; Ergin, N.; Uysal, S.; Atahan, Ş. High-grade and low-grade gliomas: Differentiation by using perfusion MR imaging. Clin. Radiol. 2005, 60, 493–502. [Google Scholar] [CrossRef]
  69. Weng, W.; Chen, X.; Gong, S.; Guo, L.; Zhang, X. Preoperative neutrophil–lymphocyte ratio correlated with glioma grading and glioblastoma survival. Neurol. Res. 2018, 40, 917–922. [Google Scholar] [CrossRef] [PubMed]
  70. Arevalo-Perez, J.; Peck, K.K.; Young, R.J.; Holodny, A.I.; Karimi, S.; Lyo, J. Dynamic Contrast-Enhanced Perfusion MRI and Diffusion-Weighted Imaging in Grading of Gliomas. J. Neuroimaging 2015, 25, 792–798. [Google Scholar] [CrossRef] [Green Version]
  71. Coroller, T.P.; Bi, W.L.; Huynh, E.; Abedalthagafi, M.; Aizer, A.A.; Greenwald, N.F.; Parmar, C.; Narayan, V.; Wu, W.W.; De Moura, S.M.; et al. Radiographic prediction of meningioma grade by semantic and radiomic features. PLoS ONE 2017, 12, e0187908. [Google Scholar] [CrossRef] [Green Version]
  72. Rundo, L.; Militello, C.; Russo, G.; Vitabile, S.; Gilardi, M.C.; Mauri, G. GTV cut for neuro-radiosurgery treatment planning: An MRI brain cancer seeded image segmentation method based on a cellular automata model. Nat. Comput. 2018, 17, 521–536. [Google Scholar] [CrossRef]
  73. Sompong, C.; Wongthanavasu, S. An efficient brain tumor segmentation based on cellular automata and improved tumor-cut algorithm. Expert Syst. Appl. 2017, 72, 231–244. [Google Scholar] [CrossRef]
  74. Rundo, L.; Stefano, A.; Militello, C.; Russo, G.; Sabini, M.G.; D’Arrigo, C.; Marletta, F.; Ippolito, M.; Mauri, G.; Vitabile, S.; et al. A fully automatic approach for multimodal PET and MR image segmentation in gamma knife treatment planning. Comput. Methods Programs Biomed. 2017, 144, 77–96. [Google Scholar] [CrossRef] [PubMed]
  75. Zhu, L.; Kolesov, I.; Gao, Y.; Kikinis, R.; Tannenbaum, A. An effective interactive medical image segmentation method using fast growcut. In Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Interactive Medical Image Computing Workshop, Boston, MA, USA, 14–18 September 2014. [Google Scholar]
  76. Kharrat, A.; Ben Halima, M.; Ben Ayed, M. MRI brain tumor classification using Support Vector Machines and meta-heuristic method. In Proceedings of the 2015 15th International Conference on Intelligent Systems Design and Applications (ISDA), Marrakech, Morocco, 14–16 December 2015; pp. 446–451. [Google Scholar]
  77. Welch, M.L.; McIntosh, C.; Haibe-Kains, B.; Milosevic, M.; Wee, L.; Dekker, A.; Huang, S.; Purdie, T.G.; O’Sullivan, B.; Aerts, H.J.W.L.; et al. Vulnerabilities of radiomic signature development: The need for safeguards. Radiother. Oncol. 2019, 130, 2–9. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  78. Bi, X.; Liu, J.G.; Cao, Y.S. Classification of Low-grade and High-grade Glioma using Multiparametric Radiomics Model. In Proceedings of the 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chengdu, China, 15–17 March 2019; pp. 574–577. [Google Scholar]
  79. Kharat, K.D.; Kulkarni, P.P.; Nagori, M.B. Brain Tumor Classification Using Neural Network Based Methods. Int. J. Comput. Sci. Inform. 2012, 1, 2231–5292. [Google Scholar]
  80. Dong, F.; Li, Q.; Jiang, B.; Zhu, X.; Zeng, Q.; Huang, P.; Chen, S.; Zhang, M. Differentiation of supratentorial single brain metastasis and glioblastoma by using peri-enhancing oedema region-derived radiomic features and multiple classifiers. Eur. Radiol. 2020, 1–8. [Google Scholar] [CrossRef]
  81. Huang, C.-Y.; Lee, C.-C.; Yang, H.-C.; Lin, C.-J.; Wu, H.-M.; Chung, W.-Y.; Shiau, C.-Y.; Guo, W.-Y.; Pan, D.H.-C.; Peng, S.-J. Radiomics as prognostic factor in brain metastases treated with Gamma Knife radiosurgery. J. Neuro-Oncol. 2020, 146, 439–449. [Google Scholar] [CrossRef]
  82. Mouraviev, A.; Detsky, J.; Sahgal, A.; Ruschin, M.; Lee, Y.K.; Karam, I.; Heyn, C.; Stanisz, G.J.; Martel, A.L. Use of radiomics for the prediction of local control of brain metastases after stereotactic radiosurgery. J. Neuro-Oncol. 2020, 22, 797–805. [Google Scholar] [CrossRef]
  83. Khawaldeh, S.; Pervaiz, U.; Rafiq, A.; Alkhawaldeh, R.S. Noninvasive Grading of Glioma Tumor Using Magnetic Resonance Imaging with Convolutional Neural Networks. Appl. Sci. 2017, 8, 27. [Google Scholar] [CrossRef] [Green Version]
  84. Sun, L.; Zhang, S.; Chen, H.; Luo, L. Brain Tumor Segmentation and Survival Prediction Using Multimodal MRI Scans With Deep Learning. Front. Mol. Neurosci. 2019, 13, 810. [Google Scholar] [CrossRef] [Green Version]
  85. Sajjad, M.; Khan, S.; Muhammad, K.; Wu, W.; Ullah, A.; Baik, S.W. Multi-grade brain tumor classification using deep CNN with extensive data augmentation. J. Comput. Sci. 2019, 30, 174–182. [Google Scholar] [CrossRef]
  86. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; Van Der Laak, J.A.; Van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef] [Green Version]
  87. Qin, P.; Zhang, J.; Zeng, J.; Liu, H.; Cui, Y. A framework combining DNN and level-set method to segment brain tumor in multi-modalities MR image. Soft Comput. 2019, 23, 9237–9251. [Google Scholar] [CrossRef]
  88. Zia, R.; Akhtar, P.; Aziz, A. A new rectangular window based image cropping method for generalization of brain neoplasm classification systems. Int. J. Imaging Syst. Technol. 2017, 28, 153–162. [Google Scholar] [CrossRef]
  89. Rathi, V.G.P.; Palani, S. Brain tumor detection and classification using deep learning classifier on MRI images. Res. J. Appl. Sci. Eng. Technol. 2015, 10, 177–187. [Google Scholar]
  90. Ahammed, M.K.V.; Rajendran, V.R.; Paul, J. Glioma Tumor Grade Identification Using Artificial Intelligent Techniques. J. Med. Syst. 2019, 43, 113. [Google Scholar] [CrossRef]
Figure 1. The block diagram of the proposed system.
Figure 2. Gliomas of Grade II (a) and Grade III (b) as they appear on brain magnetic resonance images from The Cancer Imaging Archive (TCIA).
Figure 3. 3D tumor segmentation procedure for a Grade II brain tumor in 3D Slicer: (a) ROI identified, (b) application of GrowCut, (c) 3D visualization of the tumor.
Figure 4. Correlation matrices of wavelet filters: (a) HHH, (b) HHL, (c) HLH, (d) HLL, (e) LHH, (f) LHL, (g) LLH, (h) LLL.
Figure 5. Confusion matrices of all wavelet sub-bands.
Figure 6. ROC curve for the W-HHH filter classification.
Table 1. DNN hyper-parameters.
Parameter | Value
Optimizer | Stochastic gradient descent
Epsilon | 0.00000001
Elastic regularization | 0.001
Learning rate | 0.005
Loss | Cross-entropy
Epochs | 10
Mini-batch size | 1
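Read together, the settings in Table 1 describe plain online stochastic gradient descent: with a mini-batch size of 1 the weights are updated after every sample, cross-entropy drives the updates, and the regularization term shrinks the weights at each step. The following is a minimal pure-Python sketch of such a training loop, using a single logistic unit and synthetic two-feature data as stand-ins; the architecture and data are illustrative assumptions, not the authors' network.

```python
import math
import random

# Hyper-parameters taken from Table 1. The single logistic unit below is
# an illustrative stand-in, not the authors' DNN architecture.
LEARNING_RATE = 0.005
L2_PENALTY = 0.001      # "elastic regularization", approximated here as plain L2
EPSILON = 1e-8          # numerical-stability constant inside the log
EPOCHS = 10
# mini-batch size 1: weights are updated after every single sample

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def cross_entropy(p, y):
    return -(y * math.log(p + EPSILON) + (1.0 - y) * math.log(1.0 - p + EPSILON))

def train(samples, n_features):
    """Online SGD (mini-batch = 1) minimizing cross-entropy with L2 weight decay."""
    w, b = [0.0] * n_features, 0.0
    for _ in range(EPOCHS):
        for x, y in samples:
            grad = predict(w, b, x) - y          # d(loss)/d(logit)
            w = [wi - LEARNING_RATE * (grad * xi + L2_PENALTY * wi)
                 for wi, xi in zip(w, x)]
            b -= LEARNING_RATE * grad
    return w, b

# Toy two-feature data: label 1 when the first "feature" is shifted upward.
random.seed(0)
data = ([([random.random(), random.random()], 0) for _ in range(50)] +
        [([random.random() + 2.0, random.random()], 1) for _ in range(50)])
w, b = train(data, 2)
loss_after = sum(cross_entropy(predict(w, b, x), y) for x, y in data) / len(data)
loss_before = math.log(2.0)   # the untrained model outputs p = 0.5 everywhere
```

After ten epochs the mean cross-entropy falls below its untrained value and the weight on the informative feature turns positive, which is all this sketch is meant to show.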
Table 2. Computational Complexity of Classification Method.
Wavelet Sub-Band | Weights/Biases | Memory Workload | Training Samples
HHH | 43,202 | 516.0 KB | 784
HHL | 43,402 | 518.5 KB | 821
HLH | 44,602 | 533.6 KB | 772
HLL | 43,002 | 513.5 KB | 790
LHH | 43,602 | 521.0 KB | 812
LHL | 44,602 | 533.6 KB | 826
LLH | 44,402 | 531.1 KB | 816
LLL | 44,802 | 536.1 KB | 837
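The eight sub-band labels in Table 2 come from applying a low-pass (L) or high-pass (H) wavelet filter along each of the three axes of the tumor volume, giving 2³ = 8 combinations from LLL to HHH. A minimal sketch of one such decomposition for a single 2 × 2 × 2 block, using the orthonormal Haar wavelet for simplicity (the study's actual mother wavelet and implementation may differ):

```python
import math

S = 1.0 / math.sqrt(2.0)   # orthonormal Haar filter coefficient

def haar3d_2x2x2(v):
    """Single-level 3D Haar DWT of a 2x2x2 block v[z][y][x].

    Returns all eight sub-band coefficients keyed 'LLL' ... 'HHH',
    where L/H denote the low-/high-pass filter along z, y, x."""
    out = {}
    for fz in "LH":
        for fy in "LH":
            for fx in "LH":
                c = 0.0
                for z in range(2):
                    for y in range(2):
                        for x in range(2):
                            # high-pass flips the sign of the second sample
                            sz = -1.0 if (fz == "H" and z == 1) else 1.0
                            sy = -1.0 if (fy == "H" and y == 1) else 1.0
                            sx = -1.0 if (fx == "H" and x == 1) else 1.0
                            c += sz * sy * sx * v[z][y][x]
                out[fz + fy + fx] = c * S ** 3
    return out

# A constant block puts all of its energy into the low-pass LLL sub-band;
# every sub-band containing a high-pass filter vanishes.
flat = [[[1.0, 1.0], [1.0, 1.0]], [[1.0, 1.0], [1.0, 1.0]]]
coeffs = haar3d_2x2x2(flat)
```

Because the transform is orthonormal, the sum of squared coefficients equals the energy of the input block, which is one way to sanity-check an implementation.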
Table 3. Training, validation, and testing statistics.
Group | Grade II | Grade III | Total
Training | 49 | 25 | 74
Validation | 12 | 9 | 21
Testing | 16 | 10 | 26
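As a quick consistency check, the rows of Table 3 can be summed and compared against the cohort described in the abstract (77 Grade II and 44 Grade III patients, 121 in total):

```python
# (Grade II, Grade III) counts per split, as read from Table 3.
splits = {"Training": (49, 25), "Validation": (12, 9), "Testing": (16, 10)}

grade2_total = sum(g2 for g2, _ in splits.values())        # Grade II patients
grade3_total = sum(g3 for _, g3 in splits.values())        # Grade III patients
row_totals = {name: g2 + g3 for name, (g2, g3) in splits.items()}
```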
Table 4. Confusion Matrix.
Actual \ Predicted | Positive | Negative
Positive | tp (true positive) | fn (false negative)
Negative | fp (false positive) | tn (true negative)
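All of the reported performance measures derive from these four cells. The sketch below gives the standard formulas; the specific cell counts are an assumption chosen for illustration (with the 26 test scans of Table 3 and Grade II taken as the positive class, tp = 16, fn = 0, fp = 1, tn = 9 happens to reproduce the W-HHH row of Table 5).

```python
def metrics(tp, fn, fp, tn):
    """Precision, recall (sensitivity), F1, accuracy, and specificity
    computed from the four confusion-matrix cells of Table 4."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2.0 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    specificity = tn / (tn + fp)
    return precision, recall, f1, accuracy, specificity

# Illustrative counts (an assumption, not reported cell-by-cell in the paper).
precision, recall, f1, accuracy, specificity = metrics(tp=16, fn=0, fp=1, tn=9)
```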
Table 5. Classification performance of Wavelet Filters.
3D Wavelet Sub-Band | Precision (%) | Recall (%) | F1 (%) | Accuracy (%) | ROC Area (%)
W-HHH | 94.12 | 100.00 | 96.97 | 96.15 | 98.75
W-HHL | 80.00 | 100.00 | 88.89 | 84.62 | 85.62
W-HLH | 90.91 | 62.50 | 74.07 | 73.08 | 83.75
W-HLL | 100.00 | 68.75 | 81.48 | 80.77 | 85.62
W-LHH | 92.86 | 81.25 | 86.67 | 84.62 | 84.37
W-LHL | 85.71 | 75.00 | 80.00 | 76.92 | 80.00
W-LLH | 87.50 | 87.50 | 87.50 | 84.62 | 81.87
W-LLL | 78.95 | 93.75 | 85.71 | 80.77 | 81.25
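The ROC Area column can be computed without plotting a curve: the area under the ROC curve equals the normalized Mann-Whitney U statistic, i.e., the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one; this is the same U statistic that underlies the study's feature-selection test. A sketch with toy scores (not the study's predictions):

```python
def roc_auc(pos_scores, neg_scores):
    """ROC area via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs in which the positive case scores
    higher; ties count one half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy classifier scores for illustration only.
auc = roc_auc([0.9, 0.8, 0.7], [0.75, 0.6, 0.2])
```

Perfectly separated scores give an AUC of 1.0, while a classifier that scores both classes identically gives 0.5, matching the usual reading of the ROC area.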

Share and Cite

Çinarer, G.; Emiroğlu, B.G.; Yurttakal, A.H. Prediction of Glioma Grades Using Deep Learning with Wavelet Radiomic Features. Appl. Sci. 2020, 10, 6296. https://doi.org/10.3390/app10186296