Article

Hierarchical Boosting Dual-Stage Feature Reduction Ensemble Model for Parkinson’s Disease Speech Data

1 College of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400000, China
2 Chongqing Academy of Educational Sciences, Chongqing 400000, China
* Authors to whom correspondence should be addressed.
Diagnostics 2021, 11(12), 2312; https://doi.org/10.3390/diagnostics11122312
Submission received: 29 September 2021 / Revised: 24 November 2021 / Accepted: 30 November 2021 / Published: 9 December 2021
(This article belongs to the Special Issue Machine Learning Approaches for Neurodegenerative Diseases Diagnosis)

Abstract

As a neurodegenerative disease, Parkinson’s disease (PD) is hard to identify at the early stage, while using speech data to build a machine learning diagnosis model has proved effective for its early diagnosis. However, speech data show high degrees of redundancy and repetition and contain unnecessary noise, which influences the accuracy of the diagnosis results. Although feature reduction (FR) can alleviate this issue, traditional FR is one-sided: traditional feature extraction can construct high-quality features without feature preference, while traditional feature selection can achieve feature preference but cannot construct high-quality features. To address this issue, the Hierarchical Boosting Dual-Stage Feature Reduction Ensemble Model (HBD-SFREM) is proposed in this paper. The major contributions of HBD-SFREM are as follows: (1) the instance space of the deep hierarchy is built by an iterative deep extraction mechanism; (2) the manifold feature extraction method embeds the nearest neighbor feature preference method to form the dual-stage feature reduction pair; (3) the dual-stage feature reduction pair is performed iteratively by the AdaBoost mechanism to obtain instance features of higher quality, thus substantially improving the recognition accuracy of the model; (4) the deep hierarchy instance space is integrated into the original instance space to improve the generalization of the algorithm. Three public PD speech datasets and a self-collected dataset are used to test HBD-SFREM in this paper. Compared with other FR algorithms and deep learning algorithms, the accuracy of HBD-SFREM in PD speech recognition is improved significantly and is not affected by a small sample dataset. Thus, HBD-SFREM could serve as a reference for other related studies.

1. Introduction

Parkinson’s disease (PD) is a neurodegenerative disease characterized by motor stiffness, movement retardation, and tremor, along with various non-motor symptoms (NMS, such as voice disorders, sleep disorders, depression, constipation, pain, and dysarthria). Numerous studies have shown that PD patients develop NMS as the disease progresses, which seriously affects their quality of life [1].
NMS can be detected at the early stage of the disease, which allows a sound treatment plan to be designed. Dysarthria is the primary NMS and plays a guiding role in the study of PD pathogenesis. In addition, the advantages of speech data collection have made speech analysis gradually become the main analysis method for PD recognition as well as a key research area for early PD recognition [2].
However, speech data exhibit a high rate of redundancy and repetition and contain much unnecessary noise. Feature reduction (FR) can help alleviate this issue; the topic has attracted extensive attention from researchers and has great research significance [3]. Early FR research on PD speech recognition primarily focused on feature selection, which can be simply considered as selecting the optimal feature subset from the original feature space. Representative feature selection algorithms include Relief [4,5,6], minimum redundancy maximum relevance (mRMR) [3], SBS [7], PSO [8], SFS [9], LASSO [2,4], and the p-value method [10]. Rovini et al. selected the optimal subset of features from the original features using the p-value algorithm [10]. Sakar and Kursun [11] proposed a new feature selection algorithm based on mutual information and trained the model using support vector machines (SVM), achieving an accuracy of 92.75%. Musa Peker [12] used mRMR to identify valid features and then fed the obtained features into a complex-valued artificial neural network. Benba et al. [13] selected features based on pathology thresholds through the Multi-Dimensional Voice Program (MDVP) and then submitted the obtained features to K-nearest neighbors (KNN) and SVM, achieving an accuracy of 95%. Shirvan et al. [14] used genetic algorithms and KNN to determine the optimal features that affect the recognition result.
Feature extraction is another type of FR algorithm; the idea is to map high-dimensional features into a low-dimensional space while keeping as much information of the original instances as possible [15]. Early work primarily used linear approaches, of which PCA [16,17] and LDA [18,19,20,21] are representative methods. Chen et al. [21] developed a PD detection system that used PCA to extract features and trained the model with a fuzzy KNN classifier, achieving an accuracy of 96.7%. Hariharan et al. extracted PD features using PCA and LDA and obtained a high accuracy rate [9]. Linear feature extraction methods generally assume the data lie in a high-dimensional linear space, which contradicts the non-linear characteristics of real-world PD speech datasets [22,23,24]. Thus, linear feature extraction cannot be applied well to non-linear data spaces, as it limits the accuracy of PD recognition [25]. More recently, non-linear feature extraction has been developed and applied to PD recognition [19,26,27]. Kernel mapping and deep neural network mapping are two representative types of non-linear feature extraction methods. Yang et al. achieved good results by extracting features from PD speech data through SFS and PCA with a kernel [19]. Avci and Dogantekin proposed the Genetic Algorithm-Wavelet Kernel-Extreme Learning Machine (GA-WK-ELM), in which wavelet kernels were used to map non-linear features from PD speech data [25]. Grover et al. used deep neural networks to process PD speech features and predict the severity of PD [26]. Vásquez-Correa et al. considered multimodal information, including not only speech data of PD patients but also handwriting, gait, and posture data, and trained recognition models with deep learning methods [27].
Manifold learning is another type of feature extraction method that can be applied to small sample datasets. Locality preserving projection (LPP) is a representative manifold learning algorithm, which preserves the nearest neighbor structure between data samples after feature extraction while reducing the feature dimensionality [28]. However, since LPP is a nearest neighbor retention algorithm, most of the improved algorithms based on LPP only focus on the differences between classes and do not consider the large differences within classes [29,30,31]. Liu et al. considered both interclass and intraclass data aliasing, which effectively solves these problems [16].
In recent studies, some scholars have attempted to integrate the advantages of feature selection and feature extraction to create hybrid feature processing methods. M. Hariharan et al. [9] proposed a hybrid system using SFS and PCA to process the data features and feed the processed features into a least squares support vector machine classifier to learn the prediction model. H. Almayyan et al. [32] proposed a hybrid recognition system that uses PCA and Relief for feature processing and SVM combined with recursive feature elimination (SVMRFE) as the classifier to train the model. In addition, that study used the SMOTE technique to balance and diversify the dataset.
Based on the above analysis, the FR method can address the high redundancy, high repetition, and noise of speech data. However, traditional feature extraction can construct high-quality features but cannot achieve feature preference, while traditional feature selection can achieve feature preference but cannot construct high-quality features. The two types of FR methods differ in principle but can complement each other. Thus, it is necessary to propose a feature reduction method that simultaneously achieves feature preference and high-quality feature construction. Although some related studies have made progress in this field [21,32], critical problems remain to be solved: (1) feature extraction and feature selection are integrated only once, and the absence of multiple iterations to find the optimal fusion makes it impossible to obtain higher-quality merged features; (2) existing methods only consider the feature information of samples in the original space and ignore the structural information of deeper instances. To address these issues, the Hierarchical Boosting Dual-Stage Feature Reduction Ensemble Model for Parkinson’s disease speech data (HBD-SFREM) is proposed in this study. The major contributions and innovations of this model are listed below.
  • The instance space of the hierarchy is built by an iterative deep extraction mechanism.
  • The manifold feature extraction method embeds the nearest neighbor feature preference method to form a dual-stage feature reduction pair module.
  • The dual-stage feature reduction pair (D-Spair) module is iteratively performed by the AdaBoost mechanism to obtain higher quality features, thus achieving a substantial improvement in model diagnosis accuracy.
  • The deep hierarchy instance space is integrated into the original instance space to enhance the generalization ability of the model.
The remainder of this paper is organized as follows. Section 2 introduces the principles related to the proposed model; Section 3 describes the experiments designed in this paper and presents and analyzes the results; Section 4 discusses the limitations and contributions of this study.

2. Materials and Methods

2.1. Symbol Description

In order to facilitate the presentation of the HBD-SFREM, some symbols need to be defined first. The datasets used in this study are numerical matrices, described as $X = [x_1, x_2, \ldots, x_N]^T = [X_1, X_2, \ldots, X_C]^T \in \mathbb{R}^{N \times D}$, where $N = N_1 + N_2 + \cdots + N_C$. By default, each row represents an instance, $N$ indicates the number of instances in $X$, $D$ denotes the dimension of $X$, and $C$ is the number of categories; the labels of the instances are expressed as $y = [y_1, y_2, \ldots, y_N]^T \in \mathbb{R}^N$. The number of instances in each hierarchy is determined by the number of instances in the upper hierarchy and $P$, where $P$ is the proportion of instances retained when IDEM (the iterative deep extraction mechanism) is performed. The mapping matrix generated by the D-Spair maps $\mathbb{R}^D$ to $\mathbb{R}^d$ ($d < D$), where $\mathbb{R}^D$ represents the high-dimensional space and $\mathbb{R}^d$ the low-dimensional space.

2.2. The Proposed Algorithm

2.2.1. Construction of the Different Hierarchy Instance Space

In this part, the number of layers $H$ of the hierarchical instance spaces and the number of independent instance subspaces $n$ are used. One of the primary innovations of this paper is that the deep hierarchy instance space is constructed based on IDEM. The relationships between the instance spaces of different hierarchies are analyzed by learning instances from the different hierarchy spaces, which also improves the generalization ability of the final model.
In the IDEM mechanism, $\pi_j$ denotes a cluster and $\{\pi_j\}_{j=1}^{k}$ the clustered partition of the data points, while the radial basis function $\phi$ maps the data to a high-dimensional space; thus, the objective function is defined as:

$$D(\{\pi_j\}_{j=1}^{k}) = \sum_{j=1}^{k} \sum_{a \in \pi_j} w(a)\, \left\| \phi(x_a) - m_j \right\|^2, \qquad m_j = \frac{\sum_{b \in \pi_j} w(b)\,\phi(x_b)}{\sum_{b \in \pi_j} w(b)}$$

where $m_j$ is the center of each cluster and $x_a$ is an instance of $X_{train}$.
Assuming each cluster has the same weight, the Euclidean distance of a sample $\phi(x_i)$ to the cluster center $m_j$ can be expanded using kernel values only:

$$\left\| \phi(x_i) - \frac{\sum_{a \in \pi_j} \phi(x_a)}{|\pi_j|} \right\|^2 = \phi(x_i)\cdot\phi(x_i) - \frac{2\sum_{x_a \in \pi_j} \phi(x_i)\cdot\phi(x_a)}{|\pi_j|} + \frac{\sum_{x_a, x_l \in \pi_j} \phi(x_a)\cdot\phi(x_l)}{|\pi_j|^2}, \qquad \phi(x_i)\cdot\phi(x_j) = \kappa(x_i, x_j)$$
Figure 1 describes the detailed process of the IDEM. The IDEM is based on the k-means clustering method with a radial basis kernel [33,34,35]. The original dataset is defined as the first hierarchy instance space, and the IDEM mechanism clusters this hierarchy to generate the second hierarchy; the second hierarchy is then clustered to generate the third, and so on until the $H$-th hierarchy is generated, where $H \in \mathbb{N}^{+}$ ($\mathbb{N}^{+}$ represents the set of positive integers). The number of newly generated instances is $P\%$ of the number of instances in the upper hierarchy.
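To make the mechanism concrete, the following Python sketch implements one IDEM step under stated assumptions: an RBF kernel, the kernel-only distance expansion of Equation (2), and input-space cluster means as the new centroid instances. The function names (`rbf_kernel`, `kernel_kmeans`, `idem_step`) are illustrative, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # K[i, j] = exp(-gamma * ||x_i - x_j||^2), i.e., phi(x_i) . phi(x_j).
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-gamma * d2)

def kernel_kmeans(X, k, gamma=1.0, n_iter=50, seed=0):
    # Kernel k-means: distances to cluster means in feature space are
    # computed from kernel values only, as in Equation (2).
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = rbf_kernel(X, gamma)
    labels = rng.integers(0, k, size=n)
    for _ in range(n_iter):
        dist = np.empty((n, k))
        for j in range(k):
            mask = labels == j
            if not mask.any():                      # re-seed an empty cluster
                mask[rng.integers(0, n)] = True
            # ||phi(x_i) - m_j||^2 = K_ii - 2 mean_b K_ib + mean_{b,l} K_bl
            dist[:, j] = (np.diag(K) - 2.0 * K[:, mask].mean(axis=1)
                          + K[np.ix_(mask, mask)].mean())
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

def idem_step(X, y, P=0.8, gamma=1.0):
    # One IDEM iteration: cluster the current hierarchy into k = P * N
    # clusters and keep one centroid instance per cluster with a majority label.
    k = max(1, int(P * X.shape[0]))
    labels = kernel_kmeans(X, k, gamma)
    D, A = [], []
    for j in np.unique(labels):
        members = labels == j
        D.append(X[members].mean(axis=0))          # centroid sample in input space
        A.append(1.0 if y[members].mean() >= 0 else -1.0)  # assumes {-1, +1} labels
    return np.array(D), np.array(A)
```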

2.2.2. Boosting Dual-Stage Feature Reduction Pair Ensemble Module

The typical characteristics of PD speech datasets are a small sample size, high repetition, high redundancy, and a certain amount of noise. To address these characteristics, the boosting dual-stage feature reduction pair ensemble module (BD-SFREM) is designed; it includes the dual-stage feature reduction pair (D-Spair) module and the boosting ensemble module.
1. D-Spair module;
Suppose the number of instances of the $c$-th class is $kn_c$; then the total number of instances is $kn = \sum_{c=1}^{C} kn_c$. In the first step, D-Spair makes instances belonging to the same category closer together after mapping, that is, the within-class variance matrix of similar samples is reduced. The specific mathematical formula is expressed as follows:

$$\min_{M} \sum_{c=1}^{C} \left\| M^T x^{(c)} - M^T \overline{x_w^{(c)}} \right\|^2 \bigg|_{x^{(c)} \in X_w^c} = \min_{M}\, M^T S_{SC} M,$$

where $S_{SC} = \sum_{c=1}^{C} \left( x^{(c)} - \overline{x_w^{(c)}} \right) \left( x^{(c)} - \overline{x_w^{(c)}} \right)^T$ stands for the within-class variance matrix, $\overline{x^{(c)}} = \frac{1}{kn_c} \sum_{i=1}^{kn_c} x_i^{(c)}$ denotes the center of the $c$-th class, and $x_w^c$ the samples belonging to the same class.
Similarly, instances with different class labels are mapped as far apart as possible, that is, the variance matrix between different classes should be increased as much as possible. The specific mathematical formula is expressed as follows:

$$\max_{M} \sum_{c=1}^{C} \left\| M^T \overline{x_b^{(c)}} - M^T \overline{x_b} \right\|^2 \bigg|_{x_b^{(c)},\, x_b \in X_{DC}} = \max_{M}\, M^T S_{DC} M,$$

where $S_{DC} = \sum_{c=1}^{C} \left( \overline{x_b^{(c)}} - \overline{x_b} \right) \left( \overline{x_b^{(c)}} - \overline{x_b} \right)^T$ represents the scatter matrix between different classes, $\overline{x_b} = \frac{1}{kn} \sum_{i=1}^{kn} x_i$ stands for the center of the local part, and $\overline{x_b^{(c)}} = \frac{1}{N_k} \sum_{i=1}^{N_k} x_i^{(c)}$ for the center of the $c$-th class in the local part.
In addition, the nearest neighbor structure between samples is preserved during the mapping process (i.e., locality preservation). The specific mathematical formula can be described as follows:

$$\sum_{c=1}^{C} \sum_{i=1}^{N_c} \sum_{j=1}^{N} Z_{ij}^{c} \left\| M^T x_i^{(c)} - M^T x_j \right\|^2 \bigg|_{x_i^{(c)},\, x_j \in X_{train}} = M^T X_{train} A X_{train}^T M,$$

where $A = U - Z$ represents a Laplacian matrix, $U$ is a diagonal matrix with entries $U_{ii}^{c} = \sum_{j} Z_{ij}^{c}$ (the row sums of $Z$), $Z = \{Z^1, \ldots, Z^C\}$ stands for an affinity matrix, and $U = \{U^1, \ldots, U^C\}$.
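For illustration, a minimal sketch of how $S_{SC}$, $S_{DC}$, and the Laplacian $A = U - Z$ of Equations (3)–(5) might be assembled is given below. The heat-kernel affinity restricted to within-class nearest neighbors is an assumption made here for concreteness, and the function name is hypothetical.

```python
import numpy as np

def scatter_and_laplacian(X, y, t=1.0, k=5):
    # Build the within-class scatter S_SC, between-class scatter S_DC, and
    # the Laplacian A = U - Z. X is N x D with rows as instances.
    n, D = X.shape
    mean_all = X.mean(axis=0)
    S_SC = np.zeros((D, D))
    S_DC = np.zeros((D, D))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)                    # class center
        diff = Xc - mc
        S_SC += diff.T @ diff                   # within-class variance matrix
        d = (mc - mean_all)[:, None]
        S_DC += Xc.shape[0] * (d @ d.T)         # between-class scatter matrix
    # Heat-kernel affinity Z over k nearest neighbors of the same class.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    Z = np.zeros((n, n))
    for i in range(n):
        same = np.where(y == y[i])[0]
        nbrs = same[np.argsort(d2[i, same])][1:k + 1]   # skip the point itself
        Z[i, nbrs] = np.exp(-d2[i, nbrs] / t)
    Z = np.maximum(Z, Z.T)                      # symmetrize the affinity matrix
    U = np.diag(Z.sum(axis=1))                  # diagonal degree matrix
    return S_SC, S_DC, U - Z                    # A = U - Z
```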
Thus, the objective function of the feature extraction part of the D-Spair is designed to minimize the local variance matrix within the same category and maximize the variance matrix between different categories, while preserving the nearest neighbor structure of each instance. Based on the description of Equations (3)–(5), the mathematical expression of the feature extraction part is expressed as follows.
$$\min_{M}\, \mathrm{Tr}\left( M^T S_{SC} M + M^T X_{train} A X_{train}^T M - M^T S_{DC} M \right),$$
Equation (6) can be transformed by the Lagrange multiplier method into Equation (7):

$$L(M, \lambda) = M^T \left( S_{SC} + \lambda \left( \gamma X_{train} A X_{train}^T - \mu S_{DC} \right) \right) M,$$

Taking the derivative with respect to $M$ gives the optimal solution:

$$\frac{\partial L(M, \lambda)}{\partial M}\bigg|_{x_i, x_j \in X_{train}} = 0 \;\Rightarrow\; \left( \mu S_{DC} - \gamma X_{train} A X_{train}^T \right)^{-1} S_{SC}\, M = \lambda M,$$

where $\lambda$ and $\gamma$ are penalty factors. Solving Equation (8) yields the projection matrix $M$: the columns of $M \in \mathbb{R}^{D \times d}$ are the generalized eigenvectors of $\left( \mu S_{DC} - \gamma X_{train} A X_{train}^T \right)^{-1} S_{SC}$ associated with the first $d$ largest eigenvalues $\lambda$. The matrix $M_k = (m_1, m_2, \ldots, m_k)$ is composed of the first $k$ eigenvectors of $M$.
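The projection of Equation (8) can then be obtained with a standard generalized eigensolver. The sketch below assumes the `scatter_and_laplacian` helper from the previous sketch and adds a small ridge term for numerical invertibility; it is an illustration, not the authors' code.

```python
import numpy as np
from scipy.linalg import eig

def dspair_projection(X, y, mu=1.0, gamma=1.0, d=10, reg=1e-6):
    # Stage one of D-Spair: solve S_SC m = lambda (mu*S_DC - gamma*X^T A X) m,
    # Equation (8) written as a generalized eigenproblem.
    S_SC, S_DC, A = scatter_and_laplacian(X, y)
    B = mu * S_DC - gamma * (X.T @ A @ X)   # rows of X are instances, so X^T A X is D x D
    B += reg * np.eye(B.shape[0])           # ridge term for invertibility
    vals, vecs = eig(S_SC, B)
    vals, vecs = vals.real, vecs.real
    order = np.argsort(vals)[::-1]          # keep the d largest eigenvalues, as in the text
    return vecs[:, order[:d]]               # D x d projection matrix M

# X_mapped = X @ M  would then give the high-quality extracted features.
```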
Next, the matrix $M_k$ is used to map $X_{train}$, realizing high-quality feature extraction; the mapped data are denoted $X'_{train}$. Define the sample set $X'_{train}$ as $S$, and divide it into $S^+$ and $S^-$ according to the class labels of the instances. An instance $X_i$ is randomly selected from $S$ without replacement ($X_i \in S$). According to the nearest neighbor criterion, one instance is also selected from each of $S^+$ and $S^-$, denoted $nearst_i^+$ and $nearst_i^-$, respectively. Assume that $X_i$ has $p$ features, i.e., each $X_i$ consists of a $p$-dimensional vector $(x_{i1}, x_{i2}, \ldots, x_{ip})$, where $x_{ij}$ is the $j$-th feature of $X_i$.
Similarly, $W_i$ denotes the feature weight vector of $X_i$, which also consists of $p$ dimensions $(w_{i1}, w_{i2}, \ldots, w_{ip})$, where $w_{ij}$ denotes the weight of feature $x_{ij}$. Like $X_i$, $nearst_i^+$ and $nearst_i^-$ are also $p$-dimensional vectors. First, the weights are initialized as $W_i = (w_{i1}, w_{i2}, \ldots, w_{ip}) = (0, 0, \ldots, 0)$. Second, $w_{ij}$ is updated according to the distances of $x_{ij}$ from $nearst_{ij}^+$ and $nearst_{ij}^-$. The feature weights $W_i$ of a single instance are obtained by iterating $p$ times, and the feature weights $W$ of all instances by repeating the above process $m$ times. Finally, the higher-quality features that are useful to the training model are selected according to $W$. The related mathematical expressions are as follows:

$$w_{ij} = w_{ij} + \left| x_{ij} - nearst_{ij}^{-} \right| - \left| x_{ij} - nearst_{ij}^{+} \right|, \quad j = 1, 2, \ldots, p,$$

$$W = \frac{1}{m} \sum_{i=1}^{m} W_i = \frac{1}{m} \sum_{i=1}^{m} (w_{i1}, w_{i2}, \ldots, w_{ij}, \ldots, w_{ip})$$
Then, these optimal features are used to train the classifier.
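The second stage of D-Spair is the Relief-style weighting of Equations (9) and (10). A minimal sketch follows; the function name is hypothetical, and the Manhattan distance used for the nearest neighbor search is an assumption.

```python
import numpy as np

def relief_weights(X, y, seed=0):
    # Equations (9)-(10): for each instance, find the nearest hit (same
    # class) and nearest miss (other class); features that separate the
    # classes accumulate positive weight.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    W = np.zeros(p)
    for i in rng.permutation(n):            # visit every instance once, without replacement
        hits = np.where((y == y[i]) & (np.arange(n) != i))[0]
        misses = np.where(y != y[i])[0]
        hit = X[hits[np.abs(X[hits] - X[i]).sum(axis=1).argmin()]]
        miss = X[misses[np.abs(X[misses] - X[i]).sum(axis=1).argmin()]]
        W += np.abs(X[i] - miss) - np.abs(X[i] - hit)
    return W / n

# Keep the top-k weighted features before training the classifier, e.g.:
# selected = np.argsort(relief_weights(X_mapped, y))[::-1][:k]
```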
2. Boosting ensemble module;
In the boosting ensemble module, the AdaBoost mechanism is used to combine multiple D-Spair modules, thereby constructing the boosting ensemble. The pseudocode of BD-SFREM is shown below.
BD-SFREM
Input:
$X_{train}$: training dataset (an $N \times D$ matrix) and the corresponding labels $Y_{train}$ (an $N \times 1$ vector)
$X_{valid}$: validation dataset (an $N \times D$ matrix) and the corresponding labels $Y_{valid}$ (an $N \times 1$ vector)
$X_{test}$: test dataset (an $N \times D$ matrix) and the corresponding labels $Y_{test}$ (an $N \times 1$ vector)
$T$: boosting module usage times
Threshold: flag for ending the boosting module
Output:
Final prediction $P_K^{final\_i}$ of the independent instance space
Begin
1: Given the data: $(x_1, y_1), \ldots, (x_m, y_m)$, where $x_i \in X_{train}$, $y_i \in Y_{train} = \{-1, +1\}$.
2: Initialize the weights of $X_{train}$: $D_1(i) = \frac{1}{m}$
3: while $e_t^{D\text{-}Spair} \le$ Threshold (where $e_t^{D\text{-}Spair}$ is the error calculated from the misclassified instances, $t = 1, 2, \ldots, T$) do
4:  Use the D-Spair module to obtain dual-stage features.
5:  Train a weak classifier on the dual-stage features: $h_t^{D\text{-}Spair}: X_{train} \to \{-1, +1\}$ with error $e_t^{D\text{-}Spair}$.
6:  Compute the misclassification rate on $X_{valid}$: $e_t^{D\text{-}Spair} = P\left(h_t^{D\text{-}Spair}(x_i) \ne y_i\right)$, and collect the misclassified instances.
7:  Compute the weight of the weak hypothesis: $\alpha_t^{D\text{-}Spair} = \frac{1}{2}\log\frac{1 - e_t^{D\text{-}Spair}}{e_t^{D\text{-}Spair}}$
8:  Add the misclassified instances to $X_{train}$ to form a new training set.
9: End while
10: Obtain the final prediction using $X_{test}$:
   $P_K^{final\_i} = \mathrm{sign}\left( \sum_{t=1}^{T} \alpha_t^{D\text{-}Spair}\, h_t^{D\text{-}Spair}(x) \right)$
End
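An executable reading of this pseudocode might look as follows, assuming a hypothetical `dspair_fit` helper that wraps the two D-Spair stages above and returns a projection matrix plus selected feature indices; labels are taken to be in $\{-1, +1\}$ as in step 1.

```python
import numpy as np
from sklearn.svm import SVC

def bd_sfrem(X_tr, y_tr, X_val, y_val, T=10, threshold=0.5):
    # Boosting loop of BD-SFREM: each round runs the D-Spair module, trains
    # a weak SVM, weights it by the AdaBoost rule, and appends the
    # misclassified validation instances to the training set.
    learners, alphas = [], []
    Xt, yt = X_tr.copy(), y_tr.copy()
    for t in range(T):
        M, feats = dspair_fit(Xt, yt)           # hypothetical dual-stage FR helper
        clf = SVC(kernel="rbf").fit((Xt @ M)[:, feats], yt)
        pred = clf.predict((X_val @ M)[:, feats])
        err = float(np.mean(pred != y_val))     # e_t, the misclassification rate
        if err > threshold or err == 0.0:       # stopping flag of the while loop
            break
        alphas.append(0.5 * np.log((1.0 - err) / err))  # weight of the weak hypothesis
        learners.append((M, feats, clf))
        wrong = pred != y_val                   # add misclassified instances to X_train
        Xt = np.vstack([Xt, X_val[wrong]])
        yt = np.concatenate([yt, y_val[wrong]])
    return learners, alphas

def bd_sfrem_predict(learners, alphas, X):
    # Final prediction: sign of the alpha-weighted vote over weak hypotheses.
    votes = sum(a * clf.predict((X @ M)[:, f])
                for a, (M, f, clf) in zip(alphas, learners))
    return np.sign(votes)
```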

2.2.3. Hierarchical Space Instance Learning Mechanism

The implementation of the hierarchical space instance learning mechanism is based on the construction of the different hierarchy spaces and BD-SFREM. First, the IDEM mechanism is used to construct the deep hierarchy space. Then, the BD-SFREM is applied to different hierarchy spaces to perform the hierarchy space instance learning mechanism, and the results of the deep hierarchy spaces are integrated with the results of the original hierarchy spaces in order to improve the generalization ability of the model.
The pseudocode of the hierarchical space instance learning mechanism is shown as follows:
Hierarchical space instance learning mechanism
Begin
 1: For  d = 1: H  do
 2: $[\alpha, \beta] = \mathrm{size}(X_{train})$; // get the dimension information of $X_{train}$.
 3: The number of output instance clusters: $k = \alpha \times P$.
 4: Define the size of the cluster output: $D \in \mathbb{R}^{k \times D}$, where $k$ stands for the number of centroid samples of the clusters and $D$ for the dimensionality of the centroid samples.
   $$D(\{\pi_j\}_{j=1}^{k}) = \sum_{j=1}^{k} \sum_{a \in \pi_j} w(a)\, \left\| \phi(x_a) - m_j \right\|^2$$
   $$m_j = \frac{\sum_{b \in \pi_j} w(b)\, \phi(x_b)}{\sum_{b \in \pi_j} w(b)}$$
   $$A(k) = \frac{1}{k} \sum_{j=1}^{k} a_j$$
where $A$ holds the labels of the output samples after IDEM, and $a_j$ represents the labels belonging to the same category.
 5: Obtain the $d$-th hierarchy: $X_{train}^{d} = [D, A]$.
  End for
 6: Obtain the different hierarchy spaces constructed by IDEM: $X_{train}^{H} = [X_{train}^{1}, X_{train}^{2}, \ldots, X_{train}^{H}]$.
 7: Apply BD-SFREM on $X_{train}^{H}$.
 8: Obtain the $P_K^{final\_i}$ of the different hierarchy spaces.
End
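Under the same assumptions as the IDEM sketch in Section 2.2.1, the loop above reduces to a few lines; `build_hierarchy` is a hypothetical name.

```python
def build_hierarchy(X, y, H=2, P=0.8, gamma=1.0):
    # Steps 1-6: stack H hierarchy spaces, each generated by an IDEM step
    # over the previous layer (idem_step is the sketch from Section 2.2.1).
    layers = [(X, y)]
    for _ in range(H - 1):
        Xh, yh = idem_step(*layers[-1], P=P, gamma=gamma)
        layers.append((Xh, yh))
    return layers   # BD-SFREM is then applied to every layer

# Example: two layers with 80% instance retention, as in Table 2.
# layers = build_hierarchy(X_train, y_train, H=2, P=0.8)
```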

2.2.4. Overall Description of the Proposed Model

The overall description of the proposed model (HBD-SFREM) is described in this part. First, the different hierarchy space is constructed by IDEM. Second, a method of boosting dual-stage feature reduction process (boosting dual-stage feature reduction pair ensemble module) is established based on the proposed objective function. Finally, the above methods are applied to different hierarchy spaces to perform hierarchy space instance learning, then the results of the deep hierarchy spaces are integrated with the results of the original hierarchy instance spaces in order to improve the generalization ability of the algorithm. Figure 2 depicts the algorithm of this paper.

3. Results

3.1. Datasets

Three representative public PD speech datasets and a self-collected PD speech dataset were utilized to validate the effectiveness and innovation of the HBD-SFREM.
LSVT: The LSVT dataset was created by Professor Athanasios Tsanas of the University of Oxford ([email protected]). This dataset was designed to assess the effectiveness of rehabilitation treatment. In total, 14 subjects with PD (eight male and six female) participated in the entire data collection process. For more details, see [36].
PSDMTSR: The dataset consisted of a total sample of 40 subjects, in which 20 samples were from people with PD and 20 samples were from healthy people. For more details, see [37].
Parkinson: A total of 31 subjects’ speech data were collected in this dataset, 23 of whom were people with PD and eight of whom were healthy. For more details, see [38].
SelfData: The dataset was collected from a total of 31 subjects, 10 of whom suffered from PD and 21 of whom were healthy. Specifically, five of the 10 subjects with PD were male and five were female; 12 of the 21 healthy subjects were male and nine were female. Thirteen voice segments (samples) were collected for each subject, and each voice segment consists of 26 features. The SONY ICD-SX2000 recorder was used for voice acquisition and was kept at a distance of 15 cm from the subject’s lips during recording. Each subject was asked to read a specific piece of pronunciation material, and the resulting pronunciation was recorded. The sampling frequency was set to 44.1 kHz and the resolution to 16 bits.
Three of the four datasets (LSVT, PSDMTSR, and Parkinson) are available to the public and can be downloaded from the UCI dataset repository created by the University of California, Irvine (www.archive.ics.uci.edu/ml/index.php (accessed on 24 November 2021)). The Chinese Army Medical University provided the SelfData dataset. Brief information about the datasets is shown in Table 1.

3.2. Experimental Environment

All experiments were conducted in MATLAB version 2017b, running on a 64-bit Windows 10 PC with an Intel(R) Core i5-2300 CPU (2.80 GHz) and 8 GB of RAM. Praat, a computer speech processing program, was used to analyze and extract the speech features in this paper. The basic classifier used in this study was the SVM. For optimal performance of the D-Spair, the affinity matrix $Z$ was constructed using adjustable regularization coefficients $\lambda$ and $\gamma$ as well as an adjustable kernel parameter $t$, each tuned over the set $\{10^{-4}, 10^{-3}, 10^{-2}, \ldots, 10^{2}, 10^{3}, 10^{4}\}$. The dimension $d$ of the subspace stack network was tuned over the set $\{5, 10, 15, \ldots\}$. The local ratios $r_b$ and $r_w$ were empirically chosen as 0.9 for this study. The parameter descriptions and settings of the HBD-SFREM are shown in Table 2. In this study, all experiments were repeated ten times and the statistical results are reported.

3.3. Evaluation Criteria

A new algorithm needs to be evaluated against a series of criteria. This study selected five evaluation metrics to comprehensively evaluate the HBD-SFREM: accuracy (Acc), precision (Pre), recall (Rec), and the comprehensive metrics F-score and G-mean. All of the above metrics are constructed from the confusion matrix, a table that visualizes the model predictions [39]. The PD speech diagnosis studied in this paper is a binary classification problem, so the confusion matrix is constructed as shown in Table 3.
Based on the above definition of the confusion matrix, the evaluation metrics (EM) of the algorithmic model studied in this paper could be defined as:
  • $Acc = \dfrac{TP + TN}{TP + FP + FN + TN}$;
  • $Pre = \dfrac{TP}{TP + FP}$;
  • $Rec = \dfrac{TP}{TP + FN}$;
  • $Spe = \dfrac{TN}{FP + TN}$;
  • $G\text{-}mean = \sqrt{Rec \times Spe} = \sqrt{\dfrac{TP}{TP + FN} \times \dfrac{TN}{FP + TN}}$;
  • $F\text{-}score = \dfrac{2 \times Pre \times Rec}{Pre + Rec}$.
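These definitions map directly onto the confusion matrix entries; a minimal computational sketch:

```python
import numpy as np

def evaluation_metrics(tp, fp, fn, tn):
    # The five evaluation metrics above, computed from the confusion matrix.
    acc = (tp + tn) / (tp + fp + fn + tn)
    pre = tp / (tp + fp)
    rec = tp / (tp + fn)
    spe = tn / (fp + tn)
    g_mean = np.sqrt(rec * spe)
    f_score = 2.0 * pre * rec / (pre + rec)
    return {"Acc": acc, "Pre": pre, "Rec": rec,
            "G-mean": g_mean, "F-score": f_score}
```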

3.4. Results and Analysis

In this part, an ablation study was used to verify the major innovations of the HBD-SFREM, and then representative feature extraction and feature selection algorithms were selected for comparison. Furthermore, existing feature reduction algorithms for PD speech recognition and two deep learning methods were also compared with the proposed model. In the experiments, the hold-out method was used to divide each PD speech dataset: the dataset was randomly partitioned into three disjoint sets, namely the training, validation, and test sets. As multiple speech segments (instances) were collected for each subject, instances from the same subject were assigned to the same set; avoiding this subject-level crossover keeps the reported results authentic.
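Such subject-level splitting can be expressed, for example, with scikit-learn's GroupShuffleSplit, assuming a `subject_ids` array holding one subject identifier per speech segment (variable names are illustrative):

```python
from sklearn.model_selection import GroupShuffleSplit

# All segments of one subject end up on the same side of the split, so no
# subject appears in both the training and the test set.
gss = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(gss.split(X, y, groups=subject_ids))
X_train, X_test = X[train_idx], X[test_idx]
y_train, y_test = y[train_idx], y[test_idx]
```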

3.4.1. Verification of the Effectiveness of HBD-SFREM

This section introduces the verification results of the innovation of HBD-SFREM, including the results of the BD-SFREM and those of the hierarchical space instance learning mechanism. It is worth noting that since the construction of the different hierarchy space is the basis for its learning mechanism, the validity of the hierarchical space instance learning mechanism could further prove the effectiveness of the construction of the different hierarchy instance space.
1. Verification of the BD-SFREM;
This part gives the results of both D-Spair and BD-SFREM. Two feature processing methods were chosen for constructing the D-Spair: local discriminant preservation projection (LDPP) and Relief. To present the results more clearly, some symbols are defined below. Only-FE denotes using only LDPP to process the features, and Only-FS only Relief. D-Spair stands for the results of the D-Spair module, while BD-SFREM represents the results of the boosting dual-stage feature reduction pair ensemble module. (B) denotes the binary mode of the affinity matrix in the feature extraction and (H) the heat kernel mode. The experiments in this section were performed in the original instance space.
As shown in Table 4, for LSVT, Parkinson, and PSDMTSR, the BD-SFREM had the best results in Acc, Pre, Rec, G-mean, and F-score regardless of the classifier, while for SelfData, the BD-SFREM had the best results in Acc and Pre. In addition, the results of D-Spair and BD-SFREM were much more accurate than those of Only-FS and Only-FE, so the D-Spair module and BD-SFREM are effective. Three of the four datasets used in this paper are imbalanced. The results in Table 4 show that the BD-SFREM module is helpful in handling imbalanced datasets; for the LSVT, PSDMTSR, and Parkinson datasets in particular, its advantages are more obvious. Since the quality of the self-collected dataset is lower than that of the public datasets, model effectiveness on it was accordingly reduced; however, it can be improved by the IDEM mechanism, as illustrated in the next section.
2. Verification of the hierarchical space instance learning mechanism;
This section compares the results of the deep hierarchy instance space with those of the original instance space and illustrates the effectiveness of the hierarchical space instance learning mechanism. (O) denotes results in the original instance space and (H) results in the deep hierarchy instance space; for example, Only-FS (O) stands for the results of Only-FS in the original instance space, and Only-FS (H) for its results in the deep hierarchy instance space.
As shown in Table 5, the results of the deep hierarchy space instance (H) were improved for all PD speech datasets in diverse methods compared with the results of the original instance space (O). For LSVT, PSDMTSR, and SelfData, the results of (H) were obviously better than those of (O). For Parkinson, the results of (H) were also improved, though insignificantly. The last two columns of the table are the results of BD-SFREM, from which the results of (H) were obviously better than those of (O) in all datasets, with a maximum improvement rate of 9.53% on the LSVT dataset. Therefore, the hierarchical space instance learning mechanism in this paper is effective.
Table 6 shows the results of HBD-SFREM in the different spaces (using the SVM (RBF) classifier). From the results in Table 6, we can see that the integrated output is always optimal, which further improves the generalization performance of the whole model.

3.4.2. Comparison with the Representative Feature Processing Model

In this section, some representative feature processing methods, namely mRMR, Pvalue, SVMRFE, PCA, and LDA, were selected for comparison with the proposed model (HBD-SFREM). Because deep learning also serves as a major feature processing approach, two representative deep methods, the deep belief network (DBN) and the stacked encoder (SE), were also compared with HBD-SFREM. To facilitate the presentation of the results, some symbols are defined first: HBD-SFREM (B) stands for the results in mode B, and HBD-SFREM (H) for the results in mode H.
As shown in Table 7, the results of HBD-SFREM outperformed the algorithm reference groups on Acc and Pre, regardless of dataset and classifier. For the LSVT dataset, HBD-SFREM also outperformed the reference groups on Rec, G-mean, and F-score. For the PSDMTSR and Parkinson datasets, the results of HBD-SFREM on G-mean and F-score were more accurate than those of the reference groups. For SelfData, the results of HBD-SFREM on Acc and Pre were better than those of its reference groups. To demonstrate the advantages of HBD-SFREM more clearly, the results of using the SVM (RBF) classifier on the different datasets are given in Figure 3, where HBD-SFREM achieves the best accuracy. In summary, HBD-SFREM outperformed the reference groups in most cases, which further verifies its effectiveness.
In addition, the ROC curves of all models on the different datasets are shown in Figure 4, from which we can see that the area under the curve (AUC) of HBD-SFREM is higher than that of the comparison models. It is worth noting that since SelfData is designed to simulate the real diagnosis environment of doctors, its quality is lower than that of the other three public datasets; even under such conditions, the AUC results shown in Figure 4 still prove that the HBD-SFREM is better than the comparative methods.

3.4.3. Comparison with Relevant PD Speech Recognition Methods

HBD-SFREM primarily improves the accuracy of PD speech recognition. This section aims to show the effectiveness of the HBD-SFREM by comparing it with other PD speech FR algorithms. The algorithm reference groups are as follows:
(1) Relief-SVM [4]: This method was used by Tsanas et al. in 2012. It first applies four feature processing methods to the features of the dataset and then uses Relief with an SVM classifier with a linear kernel (Relief-SVM) to learn the model.
(2) mRMR classifier [3]: This method was used by Sakar in 2018. In [3], feature selection is first performed using mRMR, and then the prediction results of seven classifiers are integrated through voting or stacking strategies.
(3) LDA-NN-GA [20]: This algorithm was proposed by L. Ali and C. Zhu in 2019. In [20], the dataset is partitioned into a training set and a test set using the leave-one-subject-out (LOSO) method. Since each subject in the dataset contains multiple samples, the leave-one-out method here actually leaves out all samples from one subject. Then, the feature dimension of the dataset is reduced using the LDA dimension reduction algorithm, and a BP neural network optimized by a genetic algorithm is used to train the optimal prediction model (LDA-NN-GA).
(4) FC-SVM [6]: This algorithm was proposed by Cigdem O. in 2018. In [6], the Fisher criterion (FC)-based feature selection method is used to rank feature weights; finally, the first K useful features are selected based on a threshold and input to the classifier (SVM with RBF) for training.
(5) SFFS-RF [40]: This algorithm was proposed by Galaz Z. in 2016. In this study, the sequential floating feature selection algorithm (SFFS) is adopted to process the data features, and the processed results are input into an RF classifier to learn the prediction model.
Table 8 shows that HBD-SFREM always performed better than the other algorithms. For LSVT and Parkinson, its results were higher than those of the other algorithms across the board, with the largest improvement rates in accuracy being 16.67% and 38.71%, respectively, demonstrating the advantages of HBD-SFREM. For SelfData and PSDMTSR, the results of HBD-SFREM were higher than the other algorithms in most cases, with the biggest improvement rates in accuracy being 22.37% and 22.27%, respectively. In addition, the comparison algorithms selected in this section did not perform as well as described in the corresponding studies, probably because the experimental conditions in this study differed slightly from those of the reference groups. For instance, the data division method differed from the method used by the authors in [20], and the amount of training data used in this study was smaller than that of [20]. In general, the larger the number of training instances, the higher the prediction accuracy of the trained model.

4. Discussion and Conclusions

HBD-SFREM introduces a dual-stage feature processing method that integrates the advantages of traditional feature extraction and feature selection algorithms. HBD-SFREM can generate the high-quality features that are most useful to model learning and thus achieve an early and accurate diagnosis of PD. These benefits improve the identification accuracy as well as its stability. In addition, HBD-SFREM can be applied to small sample PD speech datasets, including imbalanced speech datasets. Experimental results demonstrate that the HBD-SFREM outperforms existing algorithms for PD speech diagnosis.
Currently, publicly available PD speech datasets are relatively few. Three public PD speech datasets from UCI are introduced to validate the effectiveness as well as the innovativeness of the HBD-SFREM. In addition, this article also introduces the Chinese PD speech dataset collected by the authors. The experimental results indicate that HBD-SFREM achieves significantly better performance with the datasets studied. For all datasets, HBD-SFREM largely improves the diagnosis accuracy, especially on the Parkinson dataset. The degree of accuracy is enhanced by at least 19.36% compared to the other representative feature processing algorithms. At present, there are still relatively few fusion methods to study the selection and extraction of features for PD speech recognition, so this paper lays a good foundation for future research.
For future study, more types of feature extraction and selection methods should be introduced into this research to develop and evaluate further effective algorithms, and the improvement brought by the hierarchical space instance learning mechanism should be verified further. As a framework algorithm, HBD-SFREM differs from other feature extraction and feature selection algorithms; therefore, it is a valuable reference for study in this field.

Author Contributions

M.Y.: Software, Formal Analysis, and Writing—Original Draft Preparation; J.M.: Writing—Original Draft Preparation; P.W.: Methodology, Software, and Supervision; Z.H. (Zhiyong Huang): Supervision; Conceptualization, Methodology, Software, and Writing—Reviewing and Editing; Y.L.: Supervision; Conceptualization, Methodology, Software, Writing—Reviewing and Editing, and Writing—Original Draft Preparation; H.L.: Methodology and Writing—Original Draft Preparation; Z.H. (Zeeshan Hameed): Data Curation. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Chongqing (cstc2020jcyj-msxmX0100, cstc2020jscx-gksb0010, cstc2020jscx-msxm0369); the Basic and Advanced Research Project in Chongqing (cstc2020jscx-fyzx0212, cstc2020jscx-msxm0369, cstc2020jcyj-msxmX0523, Grant 2020ZY023659); the Chongqing Natural Science Foundation (Grant No. cstc2020jcyj-msxmX0641); in part by the Research on Tumor Metastasis and Individualized Diagnosis and Treatment in Cancer Hospital Affiliated to Chongqing University Chongqing Key Laboratory Open Project Fund General Project; the Chongqing Social Science Planning Project (2018YBYY133); and the Special Project of Improving Scientific and Technological Innovation Ability of the Army Medical University (2019XLC3055).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Written informed consent has been obtained from the patient(s) to publish this paper.

Data Availability Statement

Three publicly available datasets are used in this paper and they can be downloaded from: www.archive.ics.uci.edu/ml/index.php (accessed on 24 November 2021). The code used in this study can be found at: https://github.com/YangMingYaoo/Deep-Embedded-hybrid-FR-algorithm-about-parkinsons-disease.git, (accessed on 24 November 2021).

Acknowledgments

We are grateful for the support of Natural Science Foundation of Chongqing (cstc2020jcyj-msxmX0100, cstc2020jscx-gksb0010, cstc2020jscx-msxm0369); the Chongqing Natural Science Foundation (Grant No cstc2020jcyj-msxmX0641); and in part by the Research on Tumor Metastasis and Individualized Diagnosis and Treatment in Cancer Hospital Affiliated to Chongqing University Chongqing Key Laboratory Open Project Fund General Project; the Chongqing Social Science Planning Project (2018YBYY133); and the Special Project of Improving Scientific and Technological Innovation Ability of the Army Medical University (2019XLC3055).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Arkinson, C.; Walden, H. Parkin function in Parkinson’s disease. Science 2018, 360, 267–268. [Google Scholar] [CrossRef] [Green Version]
  2. Tsanas, A.; Little, M.; McSharry, P.; Ramig, L. Accurate Telemonitoring of Parkinson’s Disease Progression by Noninvasive Speech Tests. IEEE Trans. Biomed. Eng. 2010, 57, 884–893. [Google Scholar] [CrossRef] [Green Version]
  3. Sakar, C.; Serbes, G.; Tunc, H.; Nizam, H.; Sakar, B.; Tutuncu, M.; Aydin, T.; Isenkul, M.; Apaydin, H. A comparative analysis of speech signal processing algorithms for Parkinson’s disease classification and the use of the tunable Q-factor wavelet transform. Appl. Soft Comput. J. 2018, 74, 255–263. [Google Scholar] [CrossRef]
  4. Tsanas, A.; Little, M.A.; McSharry, P.E.; Spielman, J.; Ramig, L.O. Novel Speech Signal Processing Algorithms for High-Accuracy Classification of Parkinson’s Disease. IEEE Trans. Bio-Med Eng. 2012, 59, 1264–1271. [Google Scholar] [CrossRef] [Green Version]
  5. Tuncer, T.; Dogan, S.; Acharya, U.R. Automated detection of Parkinson’s disease using minimum average maximum tree and singular value decomposition method with vowels. Biocybern. Biomed. Eng. 2019, 40, 211–220. [Google Scholar] [CrossRef]
  6. Cigdem, O.; Demirel, H. Performance analysis of different classification algorithms using different feature selection methods on Parkinson’s disease detection. J. Neurosci. Methods 2018, 309, 81–90. [Google Scholar] [CrossRef] [PubMed]
  7. Kursun, O.; Gumus, E.; Sertbas, A.; Favorov, O.V. Selection of vocal features for Parkinson’s Disease diagnosis. Int. J. Data Min. Bioinform. 2012, 6, 144–161. [Google Scholar] [CrossRef] [PubMed]
  8. Zuo, W.L.; Wang, Z.Y.; Liu, T.; Chen, H.L. Effective detection of Parkinson’s disease using an adaptive fuzzy k-nearest neighbor approach. Biomed. Signal Process. Control 2013, 8, 364–373. [Google Scholar] [CrossRef]
  9. Hariharan, M.; Polat, K.; Sindhu, R. A new hybrid intelligent system for accurate detection of Parkinson’s disease. Comput. Methods Programs Biomed. 2014, 113, 904–913. [Google Scholar] [CrossRef] [PubMed]
  10. Rovini, E.; Maremmani, C.; Moschetti, A.; Esposito, D.; Cavallo, F. Comparative Motor Pre-clinical Assessment in Parkinson’s Disease Using Supervised Machine Learning Approaches. Ann. Biomed. Eng. 2018, 46, 2057–2068. [Google Scholar] [CrossRef] [PubMed]
  11. Sakar, C.O.; Kursun, O. Telediagnosis of Parkinson’s disease using measurements of dysphonia. J. Med Syst. 2010, 34, 591–599. [Google Scholar] [CrossRef]
  12. Peker, M.; Şen, B.; Delen, D. Computer-Aided Diagnosis of Parkinson’s Disease Using Complex-Valued Neural Networks and mRMR Feature Selection Algorithm. J. Healthc. Eng. 2015, 6, 281–302. [Google Scholar] [CrossRef] [Green Version]
  13. Benba, A.; Jilbab, A.; Hammouch, A. Hybridization of best acoustic cues for detecting persons with Parkinson’s disease. In Proceedings of the 2014 Second World Conference on Complex Systems (WCCS14), Agadir, Morocco, 10–12 November 2014. [Google Scholar]
  14. Shirvan, R.A.; Tahami, E. Voice analysis for detecting Parkinson’s disease using genetic algorithm and KNN classification method. In Proceedings of the 2011 18th Iranian Conference of Biomedical Engineering (ICBME), Tehran, Iran, 14–16 December 2011. [Google Scholar]
  15. Nasreen, S. A survey of feature Selection and feature extraction techniques in machine learning. In Proceedings of the 2014 Science and Information Conference, London, UK, 27–29 August 2014. [Google Scholar]
  16. Liu, Y.; Li, Y.; Tan, X.; Wang, P.; Zhang, Y. Local discriminant preservation projection embedded ensemble learning based dimensionality reduction of speech data of Parkinson’s disease. Biomed. Signal Process. Control 2021, 63, 102165. [Google Scholar] [CrossRef]
  17. El Moudden, I.; Ouzir, M.; Elbernoussi, S. Feature selection and extraction for class prediction in dysphonia measures analysis: A case study on Parkinson’s disease speech rehabilitation. Technol. Health Care 2017, 25, 693–708. [Google Scholar] [CrossRef] [PubMed]
  18. Moudden, I.E.; Ouzir, M.; Elbernoussi, S. Automatic Speech Analysis in Patients with Parkinson’s Disease using Feature Dimension Reduction. In Proceedings of the 3rd International Conference on Mechatronics and Robotics Engineering (ICMRE 2017), Paris, France, 8–12 February 2017. [Google Scholar]
  19. Yang, S.; Zheng, F.; Luo, X.; Cai, S.; Wu, Y.; Liu, K.; Wu, M.; Chen, J.; Krishnan, S. Effective Dysphonia Detection Using Feature Dimension Reduction and Kernel Density Estimation for Patients with Parkinson’s Disease. PLoS ONE 2014, 9, e88825. [Google Scholar]
  20. Ali, L.; Zhu, C.; Zhang, Z.; Liu, Y. Automated Detection of Parkinson’s Disease Based on Multiple Types of Sustained Phonations using Linear Discriminant Analysis and Genetically Optimized Neural Network. IEEE J. Transl. Eng. Health Med. 2019, 7, 1–10. [Google Scholar] [CrossRef] [PubMed]
  21. Chen, H.L.; Huang, C.C.; Yu, X.G.; Xu, X.; Sun, X.; Wang, G.; Wang, S.J. An efficient diagnosis system for detection of Parkinson’s disease using fuzzy k-nearest neighbor approach. Expert Syst. Appl. 2013, 40, 263–271. [Google Scholar] [CrossRef]
  22. Roweis, S.T.; Saul, L.K. Nonlinear dimensionality reduction by locally linear embedding. Science 2000, 290, 2323–2326. [Google Scholar] [CrossRef] [Green Version]
  23. Tenenbaum, J.B.; Silva, V.; Langford, J.C. A global geometric framework for nonlinear dimensionality reduction. Science 2000, 290, 2319–2323. [Google Scholar] [CrossRef] [PubMed]
  24. Adeli, E.; Wu, G.; Saghafi, B.; An, L.; Shi, F.; Shen, D. Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease. Sci. Rep. 2017, 7, 41069. [Google Scholar] [CrossRef]
  25. Avci, D.; Dogantekin, A. An Expert Diagnosis System for Parkinson Disease Based on Genetic Algorithm-Wavelet Kernel-Extreme Learning Machine. Parkinson’s Dis. 2016, 2016, 5264743. [Google Scholar] [CrossRef] [Green Version]
  26. Grover, S.; Bhartia, S.; Yadav, A.; Seeja, K.R. Predicting Severity of Parkinson’s Disease Using Deep Learning. Procedia Comput. Sci. 2018, 132, 1788–1794. [Google Scholar] [CrossRef]
  27. Vásquez-Correa, J.C.; Arias-Vergara, T.; Orozco-Arroyave, J.R.; Eskofier, B.; Klucken, J.; Nöth, E. Multimodal Assessment of Parkinson’s Disease: A Deep Learning Approach. IEEE J. Biomed. Health Inform. 2019, 23, 1618–1630. [Google Scholar] [CrossRef]
  28. He, X. Locality Preserving Projections; University of Chicago: Chicago, IL, USA, 2005. [Google Scholar]
  29. Yu, G.; Peng, H.; Wei, J.; Ma, Q. Enhanced locality preserving projections using robust path-based similarity. Neurocomputing 2011, 74, 598–605. [Google Scholar] [CrossRef]
  30. Zhang, L.; Wang, X.; Huang, G.B.; Liu, T.; Tan, X. Taste recognition in E-Tong using Local Discriminant Preservation Projection. IEEE Trans. Cybern. 2018, 49, 947–960. [Google Scholar] [CrossRef]
  31. Chen, Y.; Xu, X.H.; Lai, J.H. Optimal locality preserving projection for face recognition. Neurocomputing 2011, 74, 3941–3945. [Google Scholar] [CrossRef]
  32. Fu, Y.W.; Chen, H.L.; Chen, S.J.; Li, L.J.; Hunang, S.S.; Cai, Z.N. Hybrid Extreme Learning Machine Approach for Early Diagnosis of Parkinson’s Disease. In Proceedings of the International Conference in Swarm Intelligence, Hefei, China, 17–20 October 2014; Springer: Cham, Switzerland, 2014; pp. 342–349. [Google Scholar]
  33. Coates, A.; Ng, A.Y. Learning feature representations with K-means. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2012; pp. 561–580. [Google Scholar] [CrossRef]
  34. Grigorios, F.T.; Aristidis, C.L. The global kernel k-means algorithm for clustering in feature space. IEEE Trans. Neural Netw. 2009, 20, 1181–1194. [Google Scholar] [CrossRef]
  35. He, L.; Zhang, H. Kernel K-means sampling for nyström approximation. IEEE Trans. Image Process. 2018, 27, 2108–2120. [Google Scholar] [CrossRef] [PubMed]
  36. Tsanas, A.; Little, M.A.; Fox, C.; Ramig, L.O. Objective automatic assessment of rehabilitative speech treatment in Parkinson’s disease. IEEE Trans. Neural Syst. Rehabil. Eng. 2013, 22, 181–190. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  37. Sakar, B.E.; Isenkul, M.E.; Sakar, C.O.; Sertbas, A.; Gurgen, F.; Delil, S.; Apaydin, H.; Kursun, O. Collection and Analysis of a Parkinson Speech Dataset with Multiple Types of Sound Recordings. IEEE J. Biomed. Health Inform. 2013, 17, 828–834. [Google Scholar] [CrossRef]
  38. Little, M.; McSharry, P.; Hunter, E.; Spielman, J.; Ramig, L. Suitability of Dysphonia Measurements for Telemonitoring of Parkinson’s Disease. IEEE Trans. Bio-Med. Eng. 2009, 56, 1015–1022. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Deng, X.; Liu, Q.; Deng, Y.; Mahadevan, S. An improved method to construct basic probability assignment based on the confusion matrix for classification problem. Inf. Sci. 2016, 340–341, 250–261. [Google Scholar] [CrossRef]
  40. Galaz, Z.; Mekyska, J.; Mzourek, Z.; Smekal, Z.; Rektorova, I.; Eliasova, I.; Kostalova, M.; Mrackova, M.; Berankova, D. Prosodic analysis of neutral, stress-modified and rhymed speech in patients with Parkinson’s disease. Comput. Methods Programs Biomed. 2016, 127, 301–317. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Flow chart of IDEM.
Figure 2. Graphical description of the proposed model.
Figure 3. Comparison results using different datasets.
Figure 4. (a) ROC curves on LSVT; (b) ROC curves on Parkinson; (c) ROC curves on PSDMTSR; (d) ROC curves on SelfData.
Table 1. Basic information about the datasets.

| Database | Patients | Healthy People | Instances | Features | Classes | Reference |
|---|---|---|---|---|---|---|
| LSVT | 14 | 0 | 126 | 309 | 2 | [36] |
| PSDMTSR | 20 | 20 | 1040 | 26 | 2 | [37] |
| Parkinson | 23 | 8 | 195 | 23 | 2 | [38] |
| SelfData | 10 | 21 | 403 | 26 | 2 | -- |

For the LSVT dataset, ‘healthy people’ means the number of subjects whose clinicians allowed ongoing rehabilitation, and ‘patients’ means the number of subjects whose clinicians did not allow rehabilitation. For the SelfData dataset, ‘healthy people’ denotes the number of patients treated with the relevant medication and ‘patients’ the number of patients before treatment with the relevant medication.
Table 2. Parameter description and setting.

| Parameter | Meaning | Parameter Setting |
|---|---|---|
| $H$ | Layers of deep instance space | 2 |
| $n$ | Number of independent instance spaces | 3 |
| $\lambda$ | Penalty factor for $M^T(\gamma X A X^T - S_{DC})M$ | $10^{-4}, 10^{-3}, \ldots, 10^{4}$ |
| $\gamma$ | Penalty factor for $M^T X A X^T M$ | $10^{-4}, 10^{-3}, \ldots, 10^{4}$ |
| $t$ | Kernel parameter for the affinity matrix | $10^{-4}, 10^{-3}, \ldots, 10^{4}$ |
| $k$ | Number of nearest neighbor instances in $Z$ | 5 |
| $d$ | Dimension after FR | 5, 10, 15, … |
| $P$ | Instance output rate of each hierarchy | 0.8 |
Table 3. Confusion matrix for the PD speech recognition problem.

| | Predicted Positive (P) | Predicted Negative (N) |
|---|---|---|
| Real Positive (P) | $TP$ | $FN$ |
| Real Negative (N) | $FP$ | $TN$ |
Table 4. Results of the validation of the algorithm using the ablation method (%).

| Dataset | EM | Classifier | Only-FS | Only-FE (B) | Only-FE (H) | D-Spair (B) | D-Spair (H) | BD-SFREM (B) | BD-SFREM (H) |
|---|---|---|---|---|---|---|---|---|---|
| LSVT | Acc | SVM (linear) | 78.57 | 78.57 | 78.57 | 83.33 | 83.33 | 85.71 | 92.86 |
| | | SVM (RBF) | 76.19 | 73.81 | 71.43 | 83.33 | 85.71 | 83.33 | 90.48 |
| | Pre | SVM (linear) | 95.24 | 82.76 | 91.30 | 88.89 | 88.89 | 96.00 | 100.00 |
| | | SVM (RBF) | 95.00 | 90.48 | 78.57 | 92.00 | 92.31 | 96.00 | 96.15 |
| | Rec | SVM (linear) | 71.43 | 85.71 | 75.00 | 85.71 | 85.71 | 85.71 | 89.29 |
| | | SVM (RBF) | 67.86 | 67.86 | 78.57 | 82.14 | 85.71 | 85.71 | 89.29 |
| | G-mean | SVM (linear) | 81.44 | 74.23 | 80.18 | 82.07 | 82.07 | 89.21 | 94.49 |
| | | SVM (RBF) | 79.38 | 76.26 | 67.01 | 83.91 | 85.71 | 89.21 | 91.05 |
| | F-score | SVM (linear) | 81.63 | 84.21 | 82.35 | 87.27 | 87.27 | 90.57 | 94.34 |
| | | SVM (RBF) | 79.17 | 77.55 | 78.57 | 86.79 | 88.89 | 90.57 | 92.59 |
| PSDMTSR | Acc | SVM (linear) | 45.19 | 54.81 | 52.56 | 55.77 | 56.41 | 58.07 | 58.33 |
| | | SVM (RBF) | 46.79 | 55.77 | 55.77 | 55.77 | 56.73 | 57.37 | 58.97 |
| | Pre | SVM (linear) | 42.11 | 57.89 | 54.88 | 60.98 | 60.42 | 65.43 | 61.61 |
| | | SVM (RBF) | 46.21 | 59.18 | 59.78 | 59.18 | 61.29 | 60.18 | 60.45 |
| | Rec | SVM (linear) | 45.19 | 35.26 | 28.85 | 32.05 | 37.18 | 33.97 | 44.23 |
| | | SVM (RBF) | 47.44 | 37.18 | 35.26 | 37.18 | 36.54 | 43.59 | 51.92 |
| | G-mean | SVM (linear) | 40.74 | 51.20 | 46.19 | 50.47 | 53.03 | 52.80 | 56.60 |
| | | SVM (RBF) | 46.16 | 52.58 | 51.86 | 52.58 | 53.02 | 55.69 | 58.55 |
| | F-score | SVM (linear) | 31.87 | 43.82 | 37.82 | 42.02 | 46.03 | 44.73 | 51.49 |
| | | SVM (RBF) | 42.36 | 45.67 | 44.35 | 45.67 | 45.78 | 50.56 | 55.86 |
| Parkinson | Acc | SVM (linear) | 59.68 | 66.13 | 66.13 | 67.74 | 79.03 | 96.77 | 95.16 |
| | | SVM (RBF) | 61.29 | 59.68 | 61.29 | 67.74 | 62.90 | 83.87 | 79.03 |
| | Pre | SVM (linear) | 90.32 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
| | | SVM (RBF) | 84.21 | 90.32 | 84.21 | 82.61 | 80.00 | 97.62 | 93.02 |
| | Rec | SVM (linear) | 56.00 | 58.00 | 58.00 | 60.00 | 74.00 | 96.00 | 94.00 |
| | | SVM (RBF) | 64.00 | 56.00 | 64.00 | 76.00 | 72.00 | 82.00 | 80.00 |
| | G-mean | SVM (linear) | 64.81 | 76.16 | 76.16 | 77.46 | 86.02 | 97.98 | 96.95 |
| | | SVM (RBF) | 56.57 | 64.81 | 56.57 | 50.33 | 42.43 | 86.70 | 77.46 |
| | F-score | SVM (linear) | 69.14 | 73.42 | 73.42 | 75.00 | 85.06 | 97.96 | 96.91 |
| | | SVM (RBF) | 72.73 | 69.14 | 72.73 | 79.17 | 75.79 | 89.13 | 86.02 |
| SelfData | Acc | SVM (linear) | 47.55 | 44.76 | 45.45 | 58.04 | 55.24 | 58.74 | 58.74 |
| | | SVM (RBF) | 45.45 | 43.36 | 45.45 | 46.85 | 46.15 | 49.65 | 58.04 |
| | Pre | SVM (linear) | 35.06 | 33.33 | 34.15 | 38.89 | 36.36 | 40.54 | 40.00 |
| | | SVM (RBF) | 33.75 | 32.53 | 34.52 | 35.00 | 34.57 | 34.38 | 33.33 |
| | Rec | SVM (linear) | 51.92 | 51.92 | 53.85 | 26.92 | 30.77 | 28.85 | 26.29 |
| | | SVM (RBF) | 51.92 | 51.92 | 55.77 | 53.85 | 53.85 | 42.31 | 15.38 |
| | G-mean | SVM (linear) | 48.37 | 45.95 | 46.79 | 45.18 | 46.15 | 46.77 | 45.50 |
| | | SVM (RBF) | 46.56 | 44.69 | 46.97 | 48.04 | 50.39 | 47.73 | 35.60 |
| | F-score | SVM (linear) | 41.86 | 40.60 | 41.79 | 31.82 | 33.33 | 33.71 | 32.18 |
| | | SVM (RBF) | 40.91 | 40.00 | 42.65 | 42.42 | 42.11 | 37.93 | 24.05 |
Table 5. Verification of the hierarchy space instance learning mechanism (%).

| Dataset | EM | Classifier | Only-FS (O) | Only-FS (H) | Only-FE (B) (O) | Only-FE (B) (H) | D-Spair (B) (O) | D-Spair (B) (H) | BD-SFREM (B) (O) | BD-SFREM (B) (H) |
|---|---|---|---|---|---|---|---|---|---|---|
| LSVT | Acc | SVM (linear) | 78.57 | 80.95 | 78.57 | 85.71 | 83.33 | 85.71 | 85.71 | 85.71 |
| | | SVM (RBF) | 76.19 | 83.33 | 73.81 | 83.33 | 83.33 | 85.71 | 83.33 | 92.86 |
| | Pre | SVM (linear) | 95.24 | 95.45 | 82.76 | 95.83 | 88.89 | 95.83 | 96.00 | 100.00 |
| | | SVM (RBF) | 95.00 | 92.00 | 90.48 | 88.89 | 92.00 | 92.31 | 96.00 | 96.30 |
| | Rec | SVM (linear) | 71.43 | 75.00 | 85.71 | 82.14 | 85.71 | 82.14 | 85.71 | 85.71 |
| | | SVM (RBF) | 67.86 | 82.14 | 67.86 | 85.71 | 82.14 | 85.71 | 85.71 | 92.86 |
| | G-mean | SVM (linear) | 81.44 | 83.45 | 74.23 | 87.34 | 82.07 | 87.34 | 88.10 | 85.71 |
| | | SVM (RBF) | 79.38 | 83.91 | 76.26 | 82.07 | 83.91 | 85.71 | 88.10 | 92.86 |
| | F-score | SVM (linear) | 81.63 | 84.00 | 84.21 | 88.46 | 87.27 | 88.46 | 89.21 | 88.89 |
| | | SVM (RBF) | 79.17 | 86.79 | 77.55 | 87.27 | 86.79 | 88.89 | 89.21 | 94.55 |
| PSDMTSR | Acc | SVM (linear) | 45.19 | 48.08 | 54.81 | 58.01 | 55.77 | 57.69 | 58.01 | 57.05 |
| | | SVM (RBF) | 47.44 | 52.88 | 55.77 | 57.37 | 55.77 | 57.37 | 57.37 | 60.26 |
| | Pre | SVM (linear) | 42.11 | 47.86 | 57.89 | 62.89 | 60.98 | 65.79 | 65.43 | 60.19 |
| | | SVM (RBF) | 64.22 | 56.92 | 59.18 | 60.36 | 59.18 | 60.36 | 60.18 | 61.11 |
| | Rec | SVM (linear) | 25.64 | 42.95 | 35.26 | 39.10 | 32.05 | 32.05 | 33.97 | 41.67 |
| | | SVM (RBF) | 44.87 | 23.72 | 37.18 | 42.95 | 37.18 | 42.95 | 43.59 | 56.41 |
| | G-mean | SVM (linear) | 40.74 | 47.80 | 51.20 | 54.84 | 50.47 | 51.68 | 52.80 | 54.94 |
| | | SVM (RBF) | 58.01 | 44.11 | 52.58 | 55.53 | 52.58 | 55.53 | 55.69 | 60.13 |
| | F-score | SVM (linear) | 31.87 | 45.27 | 43.82 | 48.22 | 42.02 | 43.10 | 44.73 | 49.24 |
| | | SVM (RBF) | 52.83 | 33.48 | 45.67 | 50.19 | 45.67 | 50.19 | 50.56 | 58.67 |
| Parkinson | Acc | SVM (linear) | 59.68 | 72.58 | 66.13 | 74.19 | 67.74 | 82.26 | 96.77 | 85.48 |
| | | SVM (RBF) | 61.29 | 67.74 | 59.68 | 70.97 | 67.74 | 67.74 | 83.87 | 85.48 |
| | Pre | SVM (linear) | 90.32 | 86.67 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
| | | SVM (RBF) | 84.21 | 85.71 | 90.32 | 79.63 | 82.61 | 82.61 | 97.62 | 100.00 |
| | Rec | SVM (linear) | 56.00 | 78.00 | 58.00 | 68.00 | 60.00 | 78.00 | 96.00 | 82.00 |
| | | SVM (RBF) | 64.00 | 72.00 | 56.00 | 86.00 | 76.00 | 76.00 | 82.00 | 82.00 |
| | G-mean | SVM (linear) | 64.81 | 62.45 | 76.16 | 82.46 | 77.46 | 88.32 | 97.98 | 90.55 |
| | | SVM (RBF) | 56.57 | 60.00 | 64.81 | 26.77 | 50.33 | 50.33 | 86.70 | 90.55 |
| | F-score | SVM (linear) | 69.14 | 82.11 | 73.42 | 80.95 | 75.00 | 87.64 | 97.96 | 90.11 |
| | | SVM (RBF) | 72.73 | 78.26 | 69.14 | 82.69 | 79.17 | 79.17 | 89.13 | 90.11 |
| SelfData | Acc | SVM (linear) | 47.55 | 48.25 | 44.76 | 45.45 | 58.04 | 47.55 | 58.74 | 62.94 |
| | | SVM (RBF) | 45.45 | 46.15 | 43.36 | 45.45 | 46.85 | 50.35 | 49.65 | 49.65 |
| | Pre | SVM (linear) | 35.06 | 35.53 | 33.33 | 33.75 | 38.89 | 35.05 | 40.54 | 47.06 |
| | | SVM (RBF) | 33.75 | 33.77 | 32.53 | 34.15 | 35.00 | 35.82 | 34.38 | 32.14 |
| | Rec | SVM (linear) | 51.92 | 51.92 | 51.92 | 51.92 | 26.92 | 51.92 | 28.85 | 15.38 |
| | | SVM (RBF) | 51.92 | 50.00 | 51.92 | 53.85 | 53.85 | 46.15 | 42.31 | 34.62 |
| | G-mean | SVM (linear) | 48.37 | 48.95 | 45.95 | 46.56 | 45.18 | 48.36 | 46.77 | 37.23 |
| | | SVM (RBF) | 46.56 | 46.88 | 44.69 | 46.79 | 48.04 | 49.34 | 47.73 | 44.90 |
| | F-score | SVM (linear) | 41.86 | 42.19 | 40.60 | 40.91 | 31.82 | 41.86 | 33.71 | 23.19 |
| | | SVM (RBF) | 40.91 | 40.31 | 40.00 | 41.79 | 42.42 | 40.34 | 37.93 | 33.33 |
Table 6. Verification of the integration output (%).

| Dataset | EM | Original Space | Deep Space 1 | Deep Space 2 | $P_{Final}$ |
|---|---|---|---|---|---|
| LSVT | Acc | 83.33 | 92.86 | 85.71 | 92.86 |
| | Pre | 92.00 | 96.30 | 95.83 | 96.30 |
| | Rec | 82.14 | 92.86 | 82.14 | 92.86 |
| | G-mean | 83.91 | 92.86 | 87.34 | 92.86 |
| | F-score | 86.79 | 94.55 | 88.46 | 94.55 |
| PSDMTSR | Acc | 57.37 | 54.49 | 60.26 | 60.26 |
| | Pre | 60.18 | 56.36 | 61.11 | 61.11 |
| | Rec | 43.59 | 39.74 | 56.41 | 56.41 |
| | G-mean | 55.69 | 52.45 | 60.13 | 60.13 |
| | F-score | 50.56 | 46.62 | 58.67 | 58.67 |
| Parkinson | Acc | 83.87 | 82.26 | 85.48 | 93.55 |
| | Pre | 97.62 | 67.90 | 100.00 | 97.62 |
| | Rec | 82.00 | 92.00 | 82.00 | 94.00 |
| | G-mean | 86.70 | 61.91 | 90.55 | 92.83 |
| | F-score | 89.13 | 89.32 | 90.11 | 95.92 |
| SelfData | Acc | 49.65 | 49.65 | 56.64 | 56.64 |
| | Pre | 34.38 | 32.14 | 40.74 | 40.74 |
| | Rec | 42.31 | 34.62 | 42.31 | 42.31 |
| | G-mean | 47.73 | 44.90 | 52.37 | 52.37 |
| | F-score | 37.93 | 33.33 | 41.51 | 41.51 |
Table 7. Comparison with representative feature processing algorithms (%).

| Dataset | EM | Classifier | mRMR | Pvalue | SVMRFE | PCA | LDA | DBN | SE | HBD-SFREM (B) | HBD-SFREM (H) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| LSVT | Acc | SVM (linear) | 76.19 | 83.33 | 73.81 | 83.33 | 78.57 | 78.57 | 71.43 | 88.10 | 92.86 |
| | | SVM (RBF) | 83.33 | 80.95 | 83.33 | 69.05 | 80.95 | -- | -- | 92.86 | 90.48 |
| | Pre | SVM (linear) | 100.00 | 100.00 | 94.74 | 95.65 | 91.30 | 95.24 | 94.44 | 100.00 | 100.00 |
| | | SVM (RBF) | 100.00 | 91.67 | 83.87 | 100.00 | 95.45 | -- | -- | 96.30 | 96.15 |
| | Rec | SVM (linear) | 64.29 | 75.00 | 64.29 | 78.57 | 75.00 | 71.43 | 60.71 | 89.29 | 89.29 |
| | | SVM (RBF) | 75.00 | 78.57 | 92.86 | 53.57 | 75.00 | -- | -- | 92.86 | 89.29 |
| | G-mean | SVM (linear) | 80.18 | 86.60 | 77.26 | 85.42 | 80.18 | 81.44 | 75.08 | 94.48 | 94.49 |
| | | SVM (RBF) | 86.60 | 82.07 | 77.26 | 73.19 | 83.45 | -- | -- | 92.86 | 91.05 |
| | F-score | SVM (linear) | 78.26 | 85.71 | 76.60 | 86.27 | 82.35 | 81.63 | 73.91 | 94.34 | 94.34 |
| | | SVM (RBF) | 85.71 | 84.62 | 88.14 | 69.77 | 84.00 | -- | -- | 94.55 | 92.59 |
| PSDMTSR | Acc | SVM (linear) | 48.08 | 46.47 | 52.56 | 57.05 | 48.40 | 47.60 | 60.26 | 61.22 | 66.35 |
| | | SVM (RBF) | 56.41 | 56.41 | 55.77 | 56.73 | 53.85 | -- | -- | 60.26 | 61.22 |
| | Pre | SVM (linear) | 47.86 | 45.99 | 56.25 | 63.10 | 47.13 | 46.27 | 64.29 | 69.23 | 72.57 |
| | | SVM (RBF) | 62.82 | 59.09 | 57.38 | 58.27 | 54.76 | -- | -- | 61.11 | 64.00 |
| | Rec | SVM (linear) | 42.95 | 40.38 | 23.08 | 33.97 | 26.28 | 29.81 | 64.15 | 40.38 | 52.56 |
| | | SVM (RBF) | 31.41 | 41.67 | 44.87 | 47.44 | 44.23 | -- | -- | 56.41 | 51.28 |
| | G-mean | SVM (linear) | 47.80 | 46.07 | 43.51 | 52.18 | 43.05 | 44.15 | 58.58 | 57.56 | 64.90 |
| | | SVM (RBF) | 50.57 | 54.45 | 54.69 | 55.96 | 52.98 | -- | -- | 60.13 | 60.41 |
| | F-score | SVM (linear) | 45.27 | 43.00 | 32.73 | 44.17 | 33.74 | 36.26 | 53.73 | 51.01 | 60.97 |
| | | SVM (RBF) | 41.88 | 48.87 | 50.36 | 52.30 | 48.94 | -- | -- | 58.67 | 56.94 |
| Parkinson | Acc | SVM (linear) | 72.58 | 82.26 | 80.65 | 64.52 | 69.35 | 64.52 | 67.74 | 96.77 | 95.16 |
| | | SVM (RBF) | 72.58 | 79.03 | 72.58 | 61.29 | 75.81 | -- | -- | 93.55 | 98.39 |
| | Pre | SVM (linear) | 100.00 | 100.00 | 80.65 | 100.00 | 96.97 | 100.00 | 87.50 | 100.00 | 100.00 |
| | | SVM (RBF) | 100.00 | 93.02 | 78.95 | 76.00 | 100.00 | -- | -- | 97.92 | 100.00 |
| | Rec | SVM (linear) | 74.00 | 78.00 | 100.00 | 56.00 | 64.00 | 56.00 | 70.00 | 96.00 | 94.00 |
| | | SVM (RBF) | 66.00 | 80.00 | 90.00 | 76.00 | 70.00 | -- | -- | 94.00 | 98.00 |
| | G-mean | SVM (linear) | 86.02 | 88.32 | 0.00 | 74.83 | 76.59 | 74.83 | 63.90 | 97.98 | 96.95 |
| | | SVM (RBF) | 81.24 | 77.46 | 0.00 | 0.00 | 83.67 | -- | -- | 92.83 | 98.99 |
| | F-score | SVM (linear) | 85.06 | 87.64 | 89.29 | 71.79 | 77.11 | 71.79 | 77.78 | 97.96 | 96.91 |
| | | SVM (RBF) | 79.52 | 86.02 | 84.11 | 76.00 | 82.35 | -- | -- | 95.92 | 98.99 |
| SelfData | Acc | SVM (linear) | 48.25 | 44.76 | 60.14 | 48.25 | 45.45 | 41.26 | 61.54 | 64.34 | 61.54 |
| | | SVM (RBF) | 47.55 | 45.45 | 51.75 | 45.45 | 45.45 | -- | -- | 56.64 | 66.43 |
| | Pre | SVM (linear) | 35.90 | 34.12 | 36.84 | 35.90 | 35.87 | 34.00 | 42.86 | 53.85 | 42.86 |
| | | SVM (RBF) | 35.80 | 34.52 | 33.33 | 34.88 | 35.87 | -- | -- | 40.74 | 70.00 |
| | Rec | SVM (linear) | 53.85 | 55.77 | 13.46 | 53.85 | 63.46 | 65.38 | 17.31 | 13.46 | 17.31 |
| | | SVM (RBF) | 55.77 | 55.77 | 32.69 | 57.69 | 63.46 | -- | -- | 42.31 | 13.46 |
| | G-mean | SVM (linear) | 48.25 | 44.76 | 60.14 | 48.25 | 45.45 | 42.38 | 38.76 | 35.46 | 38.76 |
| | | SVM (RBF) | 49.25 | 46.31 | 34.18 | 49.25 | 47.24 | -- | -- | 52.37 | 36.08 |
| | F-score | SVM (linear) | 45.32 | 44.69 | 41.83 | 45.04 | 46.43 | 44.74 | 24.66 | 21.54 | 24.66 |
| | | SVM (RBF) | 43.08 | 42.34 | 19.72 | 43.08 | 45.83 | -- | -- | 41.51 | 22.58 |
Table 8. Comparison of PD speech dataset processing algorithms (%).

| Methods | Classifier | LSVT | PSDMTSR | Parkinson | SelfData |
|---|---|---|---|---|---|
| HBD-SFREM (B) | SVM (linear) | 92.86 | 61.22 | 96.77 | 64.34 |
| | SVM (RBF) | 92.86 | 60.26 | 93.55 | 56.64 |
| HBD-SFREM (H) | SVM (linear) | 92.86 | 66.35 | 95.16 | 61.54 |
| | SVM (RBF) | 90.48 | 61.22 | 98.39 | 66.43 |
| Relief [4] | SVM (linear) | 78.57 | 45.19 | 59.68 | 47.55 |
| | SVM (RBF) | 76.19 | 47.44 | 61.29 | 45.45 |
| mRMR [3] | SVM (linear) | 76.19 | 48.08 | 72.58 | 48.25 |
| | SVM (RBF) | 83.33 | 56.41 | 72.58 | 47.55 |
| LDA-NN-GA [20] | -- | 81.42 | 61.38 | 80.83 | 63.00 |
| ReliefF-FC-SVM (RBF) [6] | -- | 82.54 | 61.38 | 81.67 | 62.67 |
| SFFS-RF [40] | -- | 81.64 | 60.63 | 80.83 | 60.00 |
