Article

Identification of Pulmonary Hypertension Using Entropy Measure Analysis of Heart Sound Signal

1 Department of Biomedical Engineering, Dalian University of Technology, Dalian 116024, China
2 College of Information and Communication Engineering, Dalian Minzu University, Dalian 116024, China
3 School of Control Science and Engineering, Shandong University, Jinan 250100, China
* Author to whom correspondence should be addressed.
Entropy 2018, 20(5), 389; https://doi.org/10.3390/e20050389
Submission received: 3 April 2018 / Revised: 16 May 2018 / Accepted: 19 May 2018 / Published: 21 May 2018

Abstract

This study introduced entropy measures to analyze the heart sound signals of people with and without pulmonary hypertension (PH). The lead II electrocardiography (ECG) signal and the heart sound signal were simultaneously collected from 104 subjects aged between 22 and 89; 50 were PH patients and 54 were healthy. Eleven heart sound features were extracted and three entropy measures, namely sample entropy (SampEn), fuzzy entropy (FuzzyEn) and fuzzy measure entropy (FuzzyMEn), were calculated for each feature sequence. The Mann–Whitney U test was used to study the significance of each feature between the patient and healthy groups. To reduce the age confounding factor, nine entropy measures were selected based on correlation analysis. Further, the probability density function (pdf) of each selected entropy measure was constructed for both groups by kernel density estimation, as was the joint pdf of any two or more selected entropy measures. A patient or a healthy subject can thus be classified using his/her entropy measure probability based on Bayes' decision rule. The results showed that the best identification performance by a single selected measure had a sensitivity of 0.720 and a specificity of 0.648. The performance improved to a sensitivity of 0.680 and specificity of 0.796 with the joint pdf of two measures, and to 0.740 and 0.870 with the joint pdf of multiple measures. This study showed that entropy measures could be a powerful tool for the early screening of PH patients.

1. Introduction

Pulmonary hypertension (PH) is a hemodynamic and pathophysiological condition in which pulmonary artery pressure rises above a certain threshold. PH is a potentially fatal disease that can cause right heart failure [1]. If PH is not diagnosed in a timely manner and treated actively, the consequences are serious. In the early stage of PH, the symptoms are too subtle for physicians to perceive, but the mechanical activity of the heart has already changed, and this change is reflected to some degree in the heart sound signals [2,3]. Therefore, the analysis of cardiac acoustic signals could play an important role in the initial diagnosis of PH.
Previous studies have shown that the time interval between the aortic component (A2) and the pulmonary component (P2) of the second heart sound (S2), as well as the dominant frequency of P2, increases in PH patients, offering potential value for the noninvasive diagnosis of PH [4,5]. The A2–P2 splitting interval (SI) was shown to be linked to pulmonary arterial pressure (PAP) [4,5,6]. Unfortunately, it is difficult to separate S2, A2 and P2 from a heart sound recording reliably and precisely, and likewise to estimate the splitting interval between A2 and P2 [7]. Some researchers therefore turned to extracting heart sound features that could provide relevant diagnostic information. For example, M. Elgendi, P. Bobhate and S. Jain found useful features in the time and frequency domains of the heart sound signal for distinguishing people with and without PH [8,9,10]. Another research team used machine learning algorithms to build models classifying people with and without PH based on heart sound features [11]. Unlike these previous methods, this study introduced entropy measures to analyze the heart sound signals of PH patients.
Entropy is a quantitative measure of the regularity of a sequence, so it is reasonable to use it to investigate regularity differences in heart sound features between people with and without PH. In this study, 11 heart sound features were extracted and three entropy measures, namely sample entropy, fuzzy entropy and fuzzy measure entropy, were calculated for each feature sequence. Statistical analysis ranked the entropy measures by significance level. For identification, the probability density function (pdf) of each entropy measure, as well as the joint pdf of two or more entropy measures, was built by kernel density estimation. The results indicated that entropy measures can be powerful for discriminating a PH patient from a healthy subject.

2. Materials and Methods

2.1. Data Collection

One hundred and four subjects participated in this study; all gave written informed consent. The subjects were divided into two groups: the PH group (50 patients from the second affiliated hospital of Dalian Medical University) and the healthy control group (54 subjects enrolled from Dalian University of Technology and Shandong University). The basic information of the two groups is shown in Table 1 and Table 2. Each subject was asked to rest for 10 min prior to data collection and lay supine quietly in a bed during the recording. The heart sound signal and the lead II electrocardiography (ECG) signal were simultaneously recorded for at least five minutes at a sampling frequency of 2 kHz (PowerLab 8/35, ADInstruments, Sydney, Australia). A heart sound microphone sensor (MLT201, ADInstruments, Sydney, Australia) was placed at the left third intercostal space. Echocardiography was used to measure the pulmonary artery pressure of the PH patients, and a cardiologist in the hospital made the PH diagnosis based on the measured data and symptoms. Patients with atrial fibrillation or a pacemaker were excluded from the PH group. The subjects in the healthy control group were checked by ECG, echocardiography and a routine blood test to verify their healthy condition. The R waves of the ECG signals, detected by the Pan–Tompkins algorithm [12], were used to determine the beginning of each cardiac cycle. The first heart sound (S1) and the second heart sound (S2) were segmented by the Shannon energy envelope [13]. Manual verification of the segmentation was necessary when the envelope-based algorithm failed, especially for the PH patients' heart sound signals, where heavy murmurs often occurred. Figure 1 and Figure 2 show two examples of heart sound segmentation for a PH patient and a healthy subject.
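As an illustration of the segmentation step, a sliding-window Shannon energy envelope can be sketched as below. This is a minimal sketch only: the function name, window length and step size are our assumptions, not values reported in the paper.

```python
import numpy as np

def shannon_energy_envelope(x, fs, win_ms=20, step_ms=10):
    """Sliding-window average Shannon energy of a normalized signal.

    Window/step defaults are illustrative assumptions, not paper values.
    """
    x = x / (np.max(np.abs(x)) + 1e-12)       # normalize to [-1, 1]
    win = int(fs * win_ms / 1000)
    step = int(fs * step_ms / 1000)
    env = []
    for start in range(0, len(x) - win + 1, step):
        seg = x[start:start + win] ** 2
        seg = np.clip(seg, 1e-12, None)       # avoid log(0) on silent samples
        env.append(-np.mean(seg * np.log(seg)))
    return np.asarray(env)
```

Peaks of this envelope mark candidate S1/S2 locations, which the paper then verifies against the ECG R waves.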

2.2. Feature Extraction

The time interval of a heart sound and the cycle duration were considered as features in this study. The sound interval is the time from the start to the end of a heart sound, and the cycle duration is the time from the start of one S1 to the start of the next S1, as illustrated in Figure 3. The authors defined “Int_s1” as the time interval of S1, “Int_s2” as the time interval of S2, and “Car_cycle” as the cardiac cycle duration. One “Int_s1”, one “Int_s2” and one “Car_cycle” can be detected from each cardiac cycle of the heart sound recording, so feature sequences of “Int_s1”, “Int_s2” and “Car_cycle” were produced from a heart sound recording.
Another four features came from the power spectral domain of S1 and S2. The Burg algorithm was used to calculate the power spectral density (PSD) because of its smooth estimate, with the order of the autoregressive model empirically set to 4 [14]. The maximum magnitude of the PSD of S2, denoted “Max_pow_s2”, and the corresponding frequency, denoted “Max_f_s2”, were extracted as two features, as illustrated in Figure 4. “Max_pow_s1” and “Max_f_s1” were extracted from S1 in the same way.
Another four features were considered in the energy domain: the average energy of S1 (denoted “Ener_s1”), the average energy of S2 (“Ener_s2”), the average Shannon energy of S1 (“ShanEner_s1”) and the average Shannon energy of S2 (“ShanEner_s2”), given by
$$\mathrm{Ener\_S1} = \frac{1}{L_{s1}} \sum_{i=1}^{L_{s1}} \left(s_1(i)\right)^2,$$
$$\mathrm{Ener\_S2} = \frac{1}{L_{s2}} \sum_{i=1}^{L_{s2}} \left(s_2(i)\right)^2,$$
$$\mathrm{ShanEner\_S1} = -\frac{1}{L_{s1}} \sum_{i=1}^{L_{s1}} \left(s_1(i)\right)^2 \log\!\left[\left(s_1(i)\right)^2\right],$$
$$\mathrm{ShanEner\_S2} = -\frac{1}{L_{s2}} \sum_{i=1}^{L_{s2}} \left(s_2(i)\right)^2 \log\!\left[\left(s_2(i)\right)^2\right],$$
where $s_1(i)$ and $s_2(i)$ are the S1 and S2 segments detected by the envelope-based algorithm, and $L_{s1}$ and $L_{s2}$ are their numbers of sampling points.
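A minimal sketch of the two energy-domain features, computed directly from a segmented sound with NumPy; the helper name `energy_features` and the `eps` guard against log(0) are ours.

```python
import numpy as np

def energy_features(s, eps=1e-12):
    """Average energy and average Shannon energy of a segmented heart sound s."""
    s = np.asarray(s, dtype=float)
    p = s ** 2                                # instantaneous energy s(i)^2
    ener = np.mean(p)                         # Ener_S  = (1/L) * sum s(i)^2
    shan = -np.mean(p * np.log(p + eps))      # ShanEner_S = -(1/L) * sum s^2 log s^2
    return ener, shan
```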
The 11 features used in this study are summarized in Table 3. Examples of feature sequences extracted from a PH patient and a healthy subject are given in Figure 5.

2.3. Entropy

2.3.1. Sample Entropy

As measures of the complexity of a digital sequence, entropy measures have been widely used for physiological signals such as electrocardiogram signals, heart rate variability signals and heart sound signals [15,16,17]. To quantify the regularity of short and noisy physiological time series, approximate entropy (ApEn) was first proposed by Pincus [18]. ApEn is easy to compute and was quickly applied to various clinical cardiovascular studies [19,20]. However, ApEn lacks consistency, its results depend on data length, and its self-matching biases the estimated complexity of physiological signals. To overcome these defects, Richman and Moorman proposed sample entropy (SampEn) [21], which is relatively consistent and less dependent on data length. Here, SampEn was used to compute the entropy value of a feature sequence as an entropy measure. The algorithm is as follows.
For a feature sequence $x(i)$ ($1 \le i \le N$), given the embedding dimension $m$ and threshold parameter $r$, form the vectors $X_i^m$
$$X_i^m = \{x(i), x(i+1), \ldots, x(i+m-1)\}, \quad 1 \le i \le N-m,$$
where $N$ is the number of feature samples. The vector $X_i^m$ represents $m$ consecutive samples of $x(i)$ starting from index $i$. Let $d_{i,j}^m$ denote the distance between $X_i^m$ and $X_j^m$ based on the maximum absolute difference
$$d_{i,j}^m = d(X_i^m, X_j^m) = \max_{0 \le k \le m-1} \left|x(i+k) - x(j+k)\right|,$$
where $d(\cdot)$ is the function that computes the maximum absolute difference. Then $B_i^m(r)$ can be calculated as
$$B_i^m(r) = \frac{1}{N-m-1} \sum_{j=1, j \ne i}^{N-m} A\!\left(r - d_{i,j}^m\right),$$
where $A(\cdot)$ is the Heaviside function
$$A\!\left(r - d_{i,j}^m\right) = \begin{cases} 1, & d_{i,j}^m \le r, \\ 0, & d_{i,j}^m > r, \end{cases}$$
$$B^m(r) = \ln\!\left[\frac{1}{N-m} \sum_{i=1}^{N-m} B_i^m(r)\right].$$
Then increase $m$ by 1 and repeat the steps above to get $B^{m+1}(r)$. Finally, SampEn is defined as
$$\mathrm{SampEn}(m, r, N) = B^m(r) - B^{m+1}(r).$$
In this study, the embedding dimension was $m = 2$ and the threshold parameter $r$ was set to 0.2 times the sequence's standard deviation [20].
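The SampEn procedure above can be sketched in NumPy as follows. This is an illustrative implementation: for simplicity it uses all available templates at each dimension, so the counts differ marginally from the N − m convention in the equations.

```python
import numpy as np

def sampen(x, m=2, r_factor=0.2):
    """Sample entropy of sequence x with embedding m and r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = r_factor * np.std(x)

    def phi(m):
        # All length-m template vectors of x
        X = np.array([x[i:i + m] for i in range(N - m)])
        count = 0
        for i in range(len(X)):
            d = np.max(np.abs(X - X[i]), axis=1)   # Chebyshev distance to all templates
            count += np.sum(d <= r) - 1            # exclude the self-match
        return count / (len(X) * (len(X) - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))
```

A perfectly regular sequence yields a value near zero, while white noise yields a much larger value.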

2.3.2. Fuzzy Entropy

SampEn is based on the Heaviside function of classical sets, a two-state classifier that judges two vectors as either “similar” or “dissimilar” with no intermediate states, which weakens the statistical stability of the results. To enhance stability, fuzzy entropy (FuzzyEn) was proposed, in which the Heaviside function is replaced by a Zadeh fuzzy set that provides a smooth similarity classifier [22,23]. The FuzzyEn and SampEn algorithms are basically the same, except that (7) and (8) are replaced by
$$B_i^m(r) = \frac{1}{N-m-1} \sum_{j=1, j \ne i}^{N-m} A\!\left(d_{i,j}^m\right),$$
$$A\!\left(d_{i,j}^m\right) = \exp\!\left(-\ln(2)\left(d_{i,j}^m / r\right)^2\right).$$
Finally, FuzzyEn is defined as
$$\mathrm{FuzzyEn}(m, r, N) = B^m(r) - B^{m+1}(r).$$
As with SampEn, the embedding dimension was $m = 2$ and the threshold parameter $r$ was set to 0.2 times the sequence's standard deviation [20].
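A sketch of FuzzyEn under the same conventions, assuming the smooth membership function exp(−ln(2)·(d/r)²) in place of the Heaviside step; as with the SampEn sketch, all available templates are used at each dimension for simplicity.

```python
import numpy as np

def fuzzyen(x, m=2, r_factor=0.2):
    """Fuzzy entropy: SampEn with the Heaviside step replaced by a
    smooth membership exp(-ln(2) * (d/r)^2)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = r_factor * np.std(x)

    def phi(m):
        X = np.array([x[i:i + m] for i in range(N - m)])
        total = 0.0
        for i in range(len(X)):
            d = np.max(np.abs(X - X[i]), axis=1)
            sim = np.exp(-np.log(2.0) * (d / r) ** 2)   # graded similarity in (0, 1]
            total += np.sum(sim) - 1.0                  # exclude self (sim = 1)
        return total / (len(X) * (len(X) - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))
```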

2.3.3. Fuzzy Measure Entropy

Fuzzy entropy focuses only on the local characteristics of the sequence, so global fluctuations may affect the results. To account for both the local and global similarity of a time series, the fuzzy measure entropy (FuzzyMEn) method was proposed [24], which is described as follows.
For a feature sequence $x(i)$ ($1 \le i \le N$), given the embedding dimension $m$, form the local vector $XL_i^m$ and the global vector $XG_i^m$
$$XL_i^m = \{x(i), x(i+1), \ldots, x(i+m-1)\} - \bar{x}(i),$$
$$XG_i^m = \{x(i), x(i+1), \ldots, x(i+m-1)\} - \bar{x}, \quad 1 \le i \le N-m.$$
The vector $XL_i^m$ represents $m$ consecutive samples of $x(i)$ starting from index $i$ with the local baseline $\bar{x}(i)$ removed, where
$$\bar{x}(i) = \frac{1}{m} \sum_{k=0}^{m-1} x(i+k), \quad 1 \le i \le N-m.$$
The vector $XG_i^m$ represents the same samples with the global mean $\bar{x}$ of $x(i)$ removed, where
$$\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x(i).$$
The local distance $dL_{i,j}^m$ between $XL_i^m$ and $XL_j^m$ and the global distance $dG_{i,j}^m$ between $XG_i^m$ and $XG_j^m$ are calculated as
$$dL_{i,j}^m = \max_{0 \le k \le m-1} \left|\left(x(i+k) - \bar{x}(i)\right) - \left(x(j+k) - \bar{x}(j)\right)\right|,$$
$$dG_{i,j}^m = \max_{0 \le k \le m-1} \left|\left(x(i+k) - \bar{x}\right) - \left(x(j+k) - \bar{x}\right)\right|.$$
Then compute the local similarity degree $DL_{i,j}^m(n_L, r_L)$ by the fuzzy function $\mu_L$ and the global similarity degree $DG_{i,j}^m(n_G, r_G)$ by the fuzzy function $\mu_G$:
$$DL_{i,j}^m(n_L, r_L) = \mu_L(dL_{i,j}^m, n_L, r_L) = \exp\!\left(-\left(dL_{i,j}^m\right)^{n_L} / r_L\right),$$
$$DG_{i,j}^m(n_G, r_G) = \mu_G(dG_{i,j}^m, n_G, r_G) = \exp\!\left(-\left(dG_{i,j}^m\right)^{n_G} / r_G\right),$$
where the thresholds $r_L$ and $r_G$ were both set to 0.2 times the sequence's standard deviation [20], and the similarity weights $n_L$ and $n_G$ were set to 3 and 2, respectively [24]. Define the functions $\phi_L^m(n_L, r_L)$ and $\phi_G^m(n_G, r_G)$ as
$$\phi_L^m(n_L, r_L) = \frac{1}{N-m} \sum_{i=1}^{N-m} \left(\frac{1}{N-m-1} \sum_{j=1, j \ne i}^{N-m} DL_{i,j}^m(n_L, r_L)\right),$$
$$\phi_G^m(n_G, r_G) = \frac{1}{N-m} \sum_{i=1}^{N-m} \left(\frac{1}{N-m-1} \sum_{j=1, j \ne i}^{N-m} DG_{i,j}^m(n_G, r_G)\right).$$
Increase $m$ by 1 and repeat the steps above to get $\phi_L^{m+1}(n_L, r_L)$ and $\phi_G^{m+1}(n_G, r_G)$. The fuzzy local measure entropy (FuzzyLMEn) and the fuzzy global measure entropy (FuzzyGMEn) are then defined as
$$\mathrm{FuzzyLMEn}(m, n_L, r_L, N) = -\ln\!\left(\phi_L^{m+1}(n_L, r_L) / \phi_L^m(n_L, r_L)\right),$$
$$\mathrm{FuzzyGMEn}(m, n_G, r_G, N) = -\ln\!\left(\phi_G^{m+1}(n_G, r_G) / \phi_G^m(n_G, r_G)\right).$$
Finally, FuzzyMEn is defined as
$$\mathrm{FuzzyMEn}(m, n_L, r_L, n_G, r_G, N) = \mathrm{FuzzyLMEn}(m, n_L, r_L, N) + \mathrm{FuzzyGMEn}(m, n_G, r_G, N).$$
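FuzzyMEn can be sketched by running the same fuzzy similarity computation twice, once on baseline-removed (local) vectors and once on mean-removed (global) vectors. The parameter defaults follow the paper; the function itself is our illustrative code and again uses all available templates at each dimension.

```python
import numpy as np

def fuzzymen(x, m=2, r_factor=0.2, n_local=3, n_global=2):
    """Fuzzy measure entropy: sum of local (baseline-removed) and
    global (mean-removed) fuzzy entropies."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = r_factor * np.std(x)

    def phi(m, n, use_local):
        V = np.array([x[i:i + m] for i in range(N - m)])
        if use_local:
            V = V - V.mean(axis=1, keepdims=True)   # remove local baseline of each vector
        else:
            V = V - x.mean()                        # remove global mean of the sequence
        total = 0.0
        for i in range(len(V)):
            d = np.max(np.abs(V - V[i]), axis=1)
            total += np.sum(np.exp(-(d ** n) / r)) - 1.0   # fuzzy similarity, minus self
        return total / (len(V) * (len(V) - 1))

    local = -np.log(phi(m + 1, n_local, True) / phi(m, n_local, True))
    glob = -np.log(phi(m + 1, n_global, False) / phi(m, n_global, False))
    return local + glob
```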
This study used the three entropies to measure the difference in regularity of a feature sequence between PH patients and healthy subjects. Eleven feature sequences were extracted from each heart sound recording, and sample entropy, fuzzy entropy and fuzzy measure entropy were calculated for each sequence. In total, the three entropies combined with the 11 feature sequences yield 33 entropy measures per heart sound recording. For example, the sample entropy of a Max_pow_s2 sequence is abbreviated as SampEn_Max_pow_s2, and the fuzzy entropy of a Max_pow_s2 sequence as FuzzyEn_Max_pow_s2.

2.4. Statistical Tests

It is known from the mechanism of heart sound generation that the proposed heart sound features reflect the physiological and pathological condition of heart hemodynamics. The complexity embedded in a heart sound feature sequence, as measured by entropy, will therefore vary with body condition; the entropy value is expected to differ across body conditions even for the same subject. On the other hand, the estimated entropy value also changes in noisy environments. The authors treated the entropy values as random variables and assumed that they follow different distributions for PH and healthy subjects. A question then arises as to how to evaluate the significance of an entropy measure between the PH group and the healthy group. In this study, the Mann–Whitney U test, a nonparametric test applicable to non-Gaussian data, was used for this purpose; under its null hypothesis, two samples with equal medians come from the same group [25]. The significance level (p value) of each entropy measure calculated by the test was used. Typically, a threshold of 0.05 is set for the significance level. However, this study involved multiple tests of statistical significance on the same data, so a Bonferroni correction was applied to the threshold. If the significance level is below the corrected threshold, the two samples are unlikely to come from the same group, i.e., they are from the PH group and the healthy group, respectively.

2.5. Probability Density Function of an Entropy Measure Fitted by Kernel Density Estimation

To characterize the random variables, it is necessary to build the probability density function (pdf). For a single entropy measure, the pdf can be constructed by nonparametric kernel density estimation (KDE) with a Gaussian kernel [26]. Suppose $e(i)$ ($1 \le i \le n$) is an entropy measure of a feature sequence, where $n$ is the number of subjects in a group. The pdf of the entropy measure can be estimated by
$$\hat{f}(e) = \frac{1}{nh} \sum_{i=1}^{n} K_1\!\left(\frac{e - e(i)}{h}\right),$$
where $h$ is a smoothing parameter called the bandwidth and $K_1(\cdot)$ is the single-variable Gaussian kernel
$$K_1(u) = \frac{1}{\sqrt{2\pi}} \exp\!\left(-\frac{1}{2} u^2\right).$$
Silverman's rule of thumb was used to choose the bandwidth [27]:
$$h = 1.06\,\sigma\, n^{-1/5}.$$
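A minimal NumPy sketch of this one-dimensional Gaussian KDE with Silverman's bandwidth; the function name is ours.

```python
import numpy as np

def kde_pdf(samples, grid):
    """Gaussian kernel density estimate evaluated on `grid`, using
    Silverman's rule-of-thumb bandwidth h = 1.06 * sigma * n^(-1/5)."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    h = 1.06 * np.std(samples) * n ** (-1 / 5)
    u = (grid[:, None] - samples[None, :]) / h          # standardized distances
    return np.mean(np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi), axis=1) / h
```

The returned values integrate to (approximately) one over a sufficiently wide grid, as a density should.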
The joint pdf of multiple entropy measures generalizes (20). Let $e = [e_1\ e_2\ \cdots\ e_d]^T$ be a $d$-dimensional entropy measure vector. The joint pdf $f_d(e)$ of the entropy measures can be obtained by
$$f_d(e) = \frac{1}{n} \sum_{i=1}^{n} K_2(e - e_i),$$
where $e_i = (e_{i1}, e_{i2}, \ldots, e_{id})^T$, $i = 1, 2, \ldots, n$, and $K_2(\cdot)$ is a multivariate kernel function, here the multivariate normal kernel
$$K_2(\mu) = (2\pi)^{-d/2} |H|^{-1/2} \exp\!\left(-\frac{1}{2} \mu^T H^{-1} \mu\right),$$
where $\mu$ is a $d$-dimensional vector and $H$ is the $d \times d$ bandwidth (smoothing) matrix, which is symmetric and positive definite. Similarly, Silverman's rule of thumb gives the bandwidth matrix $H$ [27,28,29]:
$$H_{ii} = \left(\frac{4}{d+2}\right)^{\frac{1}{d+4}} n^{-\frac{1}{d+4}} \sigma_i,$$
where $\sigma_i$ is the standard deviation of the $i$-th variable and $H_{ij} = 0$ for $i \ne j$.
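The multivariate case with a diagonal Silverman bandwidth can be sketched likewise; this illustrative version evaluates the product of per-dimension Gaussian kernels directly, with per-dimension bandwidths as the diagonal entries.

```python
import numpy as np

def kde_pdf_multi(samples, points):
    """Multivariate Gaussian KDE with a diagonal Silverman bandwidth matrix.

    samples: (n, d) training data; points: (m, d) evaluation points.
    """
    samples = np.asarray(samples, dtype=float)
    points = np.asarray(points, dtype=float)
    n, d = samples.shape
    # Per-dimension Silverman bandwidth: h_i = (4/(d+2))^(1/(d+4)) * n^(-1/(d+4)) * sigma_i
    h = (4.0 / (d + 2)) ** (1.0 / (d + 4)) * n ** (-1.0 / (d + 4)) * samples.std(axis=0)
    u = (points[:, None, :] - samples[None, :, :]) / h   # (m, n, d) standardized offsets
    k = np.exp(-0.5 * np.sum(u ** 2, axis=2))            # product of Gaussian kernels
    norm = (2 * np.pi) ** (d / 2) * np.prod(h)
    return k.sum(axis=1) / (n * norm)
```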

2.6. Identification of a PH Patient from a Healthy Subject Using the pdf Based on the Bayes’ Decision Rule

The pdfs of the PH group and the healthy control group were estimated as described in Section 2.5. If the Mann–Whitney U test shows that an entropy measure is significant between the two groups, the pdf of that measure for the PH group will differ from that for the healthy group. The authors proposed algorithms to identify an unknown subject based on Bayes' decision rule, as given in Table 4 and Table 5.
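A minimal sketch of the Bayes' decision rule underlying the identification: choose the class with the larger posterior (prior times estimated likelihood). The actual Algorithms 1 and 2 are given in Table 4 and Table 5; the equal-prior default and function names here are our assumptions.

```python
def classify(e, pdf_ph, pdf_healthy, prior_ph=0.5):
    """Label a subject's entropy-measure value by Bayes' decision rule.

    pdf_ph / pdf_healthy are the KDE-estimated densities of each group.
    """
    post_ph = prior_ph * pdf_ph(e)            # unnormalized posterior for PH
    post_h = (1 - prior_ph) * pdf_healthy(e)  # unnormalized posterior for healthy
    return "PH" if post_ph >= post_h else "healthy"
```

The same rule applies unchanged when `e` is a vector and the pdfs are joint densities.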
An unknown subject may be classified correctly or incorrectly, so four cases can occur. A PH subject correctly identified as a PH patient is a true positive (TP); a healthy subject wrongly identified as a PH patient is a false positive (FP); a healthy subject correctly identified as healthy is a true negative (TN); and a PH subject wrongly identified as healthy is a false negative (FN). The numbers of TP, FN, TN and FP cases are denoted num_TP, num_FN, num_TN and num_FP, respectively. The sensitivity and specificity are therefore calculated as [30]
$$Sen = \frac{num\_TP}{num\_TP + num\_FN}, \qquad Spe = \frac{num\_TN}{num\_TN + num\_FP},$$
where $Sen$ and $Spe$ represent the values of sensitivity and specificity. The overall identification performance was evaluated by the accuracy
$$Acc = \frac{num\_TP + num\_TN}{num\_TP + num\_TN + num\_FP + num\_FN}.$$
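The four counts and the three indices can be computed as in this small sketch; the label strings and function name are ours.

```python
def performance(labels_true, labels_pred):
    """Sensitivity, specificity and accuracy from paired labels
    ('PH' = positive class, 'healthy' = negative class)."""
    pairs = list(zip(labels_true, labels_pred))
    tp = sum(t == "PH" and p == "PH" for t, p in pairs)
    fn = sum(t == "PH" and p == "healthy" for t, p in pairs)
    tn = sum(t == "healthy" and p == "healthy" for t, p in pairs)
    fp = sum(t == "healthy" and p == "PH" for t, p in pairs)
    sen = tp / (tp + fn)
    spe = tn / (tn + fp)
    acc = (tp + tn) / (tp + tn + fp + fn)
    return sen, spe, acc
```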

3. Results and Discussions

3.1. Significance of the Features and Reduction of the Age Confounding Factor

To investigate the significance of the 33 entropy measures, the Mann–Whitney U test was performed, as shown in Table 6. The smallest p value, 1.20 × 10⁻⁹, belonged to SampEn_max_pow_s2 (No. 13). The measures related to max_pow_s2 had very low p values, around 10⁻⁹ (No. 13–No. 15). The p values of the measures from No. 1 to No. 30 were small. The largest p value, 4.22 × 10⁻¹, belonged to SampEn_Int_s1 (No. 31). The p values of the entropy measures related to Int_s1 (No. 31–33) were much greater than the others, meaning that these measures were not significant between the PH and healthy groups.
In this study, the average ages of the PH group and the healthy group were 69.4 and 32.6, respectively, as seen in Table 1 and Table 2; the difference was significant. This raises the doubt of whether the significance of the proposed measures is attributable to the age difference, i.e., age is a confounding factor. To investigate its effect, the authors performed Pearson correlation analysis between the measures and age; the correlation coefficients (CC) are given in Table 6. The measure SampEn_max_f_s2 (No. 16) had the maximum correlation coefficient, 0.43, and FuzzyEn_max_f_s1 (No. 8) the minimum, 0.14. A further check showed that the measures from No. 12 to No. 29 had somewhat high correlation coefficients. The correlation analysis thus showed that age did contribute to the measure differences between the two groups to some degree. To reduce the age confounding factor, it is reasonable to discard the measures that correlate strongly with age. The authors set a threshold of 0.30 for the coefficients; measures with absolute coefficients below 0.30 are considered weakly correlated with age. Therefore, nine measures indicated by bold text (No. 1, 2, 4, 5, 6, 7, 8, 9 and 11) were selected as candidate measures for identification.
A threshold for the significance level is usually set at 0.05. However, a Bonferroni correction is a safeguard against multiple tests of statistical significance on the same data [31], so the significance level in this study was set to 0.05/9, i.e., 5.6 × 10⁻³. That is, an entropy measure is safely significant if its p value is below 5.6 × 10⁻³. The nine selected measures (except No. 9, which was close to the threshold) were highly significant between the two groups, indicating that they are unlikely to come from the same group and are reasonable choices for identification.

3.2. Identification Performance of a Single Entropy Measure

For a selected entropy measure, the pdfs of the PH group and the healthy control group were fitted by KDE as given in Section 2.5. Leave-one-out cross-validation was used to evaluate the identification performance: one subject was held out and the others were used for pdf estimation. The estimated pdf pairs are shown in Figure 6, with the red and black curves for the healthy group and the PH group, respectively. A subject can then be identified by Algorithm 1 proposed in Section 2.6. The corresponding Receiver Operating Characteristic (ROC) curves are given in Figure 7, and the identification performance is summarized in Table 7 in terms of sensitivity, specificity, accuracy and area under the curve (AUC).
Table 7 shows that the entropy measures related to Ener_s1 had the best identification performance, with AUC above 0.70; for example, the sensitivity, specificity and accuracy of SampEn_Ener_s1 were 0.720, 0.648 and 0.683, respectively. The performance of the entropy measures related to Max_f_s1 was the worst. The identification performance is also reflected in the overlap of the pdf pairs: a careful check of Figure 6 indicates that the pdf pairs related to Ener_s1 overlapped the least (Figure 6a1,a2), whereas those related to Max_f_s1 overlapped the most (Figure 6c1–c3). A close look at Table 6 and Table 7 shows that the entropy measures rank in almost the same order in both, evidence that the significance level from the Mann–Whitney U test and the pdf overlap are consistent and compatible. This suggests that the Mann–Whitney U test is a useful tool for finding effective entropy measures for identification purposes.

3.3. Identification Performance of Two Joint-Entropy Measures

To improve identification performance, it is natural to use the joint pdf of two selected entropy measures. Choosing any two of the nine measures yields 36 combinations, and the authors investigated the identification performance of each combination based on leave-one-out cross-validation. The joint pdfs of the best six and the worst three combinations are shown in Figure 8. A subject can be classified as a PH patient or a healthy subject by Algorithm 2 of Section 2.6, where the entropy measure vector is two-dimensional. Visual inspection of Figure 8a–f indicates that the joint pdfs of PH patients and healthy subjects overlapped less, with clearly separated peaks, whereas the joint pdfs in Figure 8g–i overlapped heavily and their peaks lay much closer together. It follows that the identification performance of the joint measures in Figure 8a–f was much better than that in Figure 8g–i. The quantitative performance indicators of the nine combinations, corresponding to the joint pdfs in Figure 8a–i, are shown in Table 8. The best performance was achieved by the combination of SampEn_Ener_s1 and SampEn_ShanEner_s2, with sensitivity, specificity, accuracy and AUC of 0.680, 0.796, 0.740 and 0.770, respectively. The quantitative performance in Table 8 confirms the observations from Figure 8, and comparison between Table 7 and Table 8 reveals that the joint pdf of two entropy measures improved the identification performance.

3.4. Identification Performance of the Joint pdf of Multiple Entropy Measures

Following the same reasoning, the authors used joint multiple measures to seek better identification performance. The joint pdf of multiple measures for the PH group and the healthy group was built by KDE based on leave-one-out cross-validation, and the proposed Algorithm 2 was used to identify a subject as a patient or a healthy one, with the entropy measure vector now multidimensional. Although pdfs of more than two dimensions cannot be visualized directly, the authors investigated the identification performance of all possible combinations of the selected entropy measures. The top ten results are shown in Table 9; they were better than those in Table 8 (joint pdf of two entropy measures) and Table 7 (pdf of a single entropy measure). For example, in the first line of Table 9, the joint pdf of five entropy measures yielded the best identification performance, with sensitivity of 0.740, specificity of 0.870, accuracy of 0.808 and AUC of 0.829. Similar performance was obtained by the other combinations in lines 2 to 10. A careful check of Table 8 and Table 9 reveals that joint multiple measures improved the identification performance, as the authors expected.

3.5. Summary and Discussions

Pulmonary hypertension (PH) is often diagnosed late because early identification is very difficult, even after the onset of symptoms, so early clinical diagnosis of PH is needed. Heart sound, the acoustic vibration generated by the interaction between heart hemodynamics and the valves, chambers and great vessels, is an important physiological signal that deserves further exploration for PH detection. Four previous papers have studied a similar topic [8,9,10,11], analyzing multiple heart sound features for normal subjects and patients; three of them [8,9,10] studied children and only one [11] studied adults. The classification performance for children reached a sensitivity of 0.93 and a specificity of 0.92, and the identification accuracy for adults in the fourth paper was 0.77. The present authors verified for adults that the regularity of the heart sound features of a PH patient differs from that of a healthy subject. Sample entropy, fuzzy entropy and fuzzy measure entropy were calculated for the proposed 11 feature sequences, and identification was achieved through the difference in entropy measure probability between the PH group and the healthy group. The results showed that this difference does exist: a subject can be classified from the estimated pdf by Bayes' decision rule, and the classification performance can be improved by joint entropy measures.
The authors wish to emphasize that the pdf was estimated under leave-one-out cross-validation: one subject was held out, the remainder were used for pdf training, and the held-out subject was then identified by Bayes' decision rule. In addition, the heart sound signals of all PH patients and healthy subjects were collected at the left third intercostal space; signals collected from other sites were not considered in this study.

4. Conclusions

This study used entropy measures (SampEn, FuzzyEn and FuzzyMEn) to evaluate the regularity of the proposed heart sound features of people with and without PH. The Mann–Whitney U test found that the entropy measures related to Max_pow_s1, Max_pow_s2, Max_f_s1, Max_f_s2, Ener_s1, Ener_s2, ShanEner_s1, ShanEner_s2, Car_cycle and Int_s2 were significant between the PH group and the healthy group, while the entropy measures related to Int_s1 were not. Correlation analysis between each entropy measure and age was conducted to reduce the age confounding factor, and nine entropy measures were selected as effective measures for identification. Further, the pdf of each single entropy measure and the joint pdf of combined entropy measures were built by KDE, and identification was achieved by the proposed Algorithms 1 and 2 based on Bayes' decision rule. With a single measure, SampEn_Ener_s1 yielded the best identification performance, with sensitivity 0.720 and specificity 0.648. With two joint measures, the joint pdf of SampEn_Ener_s1 and SampEn_ShanEner_s2 improved performance to sensitivity 0.680 and specificity 0.796. Combining multiple measures improved performance further: a combination of five of the nine measures performed best, with sensitivity and specificity up to 0.740 and 0.870. The identification process proposed in this manuscript could have potential application in early screening for PH.

Author Contributions

H.T. and T.L. conceived and designed the experiments; Y.J. performed the experiments; H.T. and Y.J. analyzed the data; X.W. contributed data from 14 subjects to improve the study; H.T. and Y.J. wrote the paper.

Funding

This research was funded in part by the National Natural Science Foundation of China (grant numbers 61471081 and 61601081) and the Fundamental Research Funds for the Central Universities (grant numbers DUT15QY60, DUT16QY13, DC201501056 and DCPY2016008).

Conflicts of Interest

The authors declare no conflict of interest.

Ethical statement

The data collection from PH patients and healthy subjects was approved by the ethics committee of the Second Hospital of Dalian Medical University, the ethics committee of Dalian University of Technology, and the ethics committee of Qilu Hospital of Shandong University. All subjects gave written informed consent.

Data availability

The data used in this study are available to those who wish to reproduce the results.

Figure 1. A part of collected signals from a PH subject. The subject was a female aged 76 with 158 cm in height and 64 kg in weight. Her pulmonary systolic pressure was 31 mmHg. (a) The ECG signal. The green circles showed the detected R waves; (b) the heart sound signal. The short and long red vertical lines indicated the detected S1s and S2s, respectively.
Figure 2. A part of collected signals from a healthy subject who was a male aged 67 with 165 cm in height and 63 kg in weight. (a) The ECG signal. The green circles showed the detected R waves; (b) the heart sound signal. The short red vertical lines indicated the detected S1s and the long ones showed the S2s.
Figure 3. Illustration of the definition of “Int_s1”, “Int_s2”, and “Car_cycle”.
Figure 4. Illustration of features extracted from power spectral domain. “Max_pow_s2” was the maximum magnitude of power spectral density of the second heart sound and “Max_f_s2” was the corresponding frequency.
Figure 5. An example to illustrate some feature sequences of a PH patient and a healthy subject. The PH patient was a 76-year-old female of 158 cm in height and 64 kg in weight. Her pulmonary systolic blood pressure was 31 mmHg. The healthy subject was a 67-year-old male of 165 cm in height and 63 kg in weight. (a1,a2) Sequence of Int_s1 in ms; (b1,b2) sequence of Int_s2 in ms; (c1,c2) sequence of Max_pow_s2; (d1,d2) sequence of Max_f_s2 in Hz; (e1,e2) sequence of Ener_s1; and (f1,f2) sequence of Ener_s2.
Figure 6. Estimated pdf pairs. (a1,a2) The two pdf pairs of Ener_s1; (a3) the pdf pairs of Fuzzy entropy of cardiac cycle; (b1b3) the three pdf pairs of ShanEner_s2; (c1c3) the three pdf pairs of Max_f_s1.
Figure 7. ROC curve of a single entropy measure. (a1,a2) the ROC curves of Ener_s1; (a3) the ROC curve of cardiac cycle; (b1b3) the ROC curves of ShanEner_s2; (c1c3) the ROC curves of Max_f_s1.
Figure 8. Contour plot of the joint pdf of two entropy measures. (af) The joint pdfs of the best six combinations; (gi) the joint pdfs of the worst three combinations.
Table 1. Basic information of the PH patient group.
Item            Value          Range
Number (M/F)    50 (26/24)     -
Age (year)      69.4 ± 12.3    33–89
Height (cm)     164.0 ± 8.4    148–177
Weight (kg)     64.5 ± 12.6    32–90
BMI (kg/m²)     23.9 ± 4.3     14.2–35.3
PSBP (mmHg)     38.4 ± 11.8    20.4–68.0
Note: values are expressed as number (male/female) or mean ± standard deviation. BMI: body mass index; PSBP: pulmonary systolic blood pressure.
Table 2. Basic information of the healthy control group.
Item            Value          Range
Number (M/F)    54 (47/7)      -
Age (year)      32.6 ± 14.9    22–67
Height (cm)     172.5 ± 7.0    155–184
Weight (kg)     64.3 ± 7.8     43–76
BMI (kg/m²)     21.6 ± 2.3     17.6–26.9
PSBP (mmHg)     <25            -
Note: values are expressed as number (male/female) or mean ± standard deviation. BMI: body mass index; PSBP: pulmonary systolic blood pressure.
Table 3. Summary of the features.
No.   Name           Physical Meaning
1     Int_s1         Time interval of S1
2     Int_s2         Time interval of S2
3     Car_cycle      Cardiac cycle
4     Max_pow_s1     Maximum magnitude of the power spectral density of S1
5     Max_f_s1       The frequency value corresponding to "Max_pow_s1"
6     Max_pow_s2     Maximum magnitude of the power spectral density of S2
7     Max_f_s2       The frequency value corresponding to "Max_pow_s2"
8     Ener_s1        Average energy of S1
9     Ener_s2        Average energy of S2
10    ShanEner_s1    Average Shannon energy of S1
11    ShanEner_s2    Average Shannon energy of S2
Table 4. Identification algorithm using a single entropy measure.
Algorithm 1: identification using a single entropy measure
Let f̂_p(e) be the pdf of an entropy measure of the PH patient group and f̂_h(e) be the pdf of the same entropy measure of the healthy control group. The entropy measure of an unknown subject is e_u.
If f̂_p(e_u) > f̂_h(e_u) then
  the unknown subject is judged as a PH patient
else
  the unknown subject is judged as a healthy subject
Table 5. Identification algorithm using joint entropy measures.
Algorithm 2: identification using joint entropy measures
Let f̂_p^d(e) be the joint pdf of d entropy measures of the PH patient group and f̂_h^d(e) be the joint pdf of the same entropy measures of the healthy control group. The entropy measure vector of an unknown subject is e_u.
If f̂_p^d(e_u) > f̂_h^d(e_u) then
  the unknown subject is judged as a PH patient
else
  the unknown subject is judged as a healthy subject
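A minimal sketch of this joint-pdf decision rule, assuming scipy's multivariate Gaussian KDE; the two-feature training matrices here are synthetic stand-ins for two selected entropy measures, not the study's data.

```python
# Sketch of Algorithm 2: joint KDE per group plus Bayes' decision rule.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Rows are features, columns are subjects (shape: d x n).
ph_train = rng.normal([[0.8], [1.0]], 0.2, size=(2, 50))
healthy_train = rng.normal([[1.3], [1.5]], 0.2, size=(2, 54))

f_p = gaussian_kde(ph_train)        # joint pdf of the PH group
f_h = gaussian_kde(healthy_train)   # joint pdf of the healthy group

def classify(e_u):
    """Label an unknown subject's entropy-measure vector e_u."""
    e_u = np.asarray(e_u, dtype=float).reshape(-1, 1)
    return "PH" if f_p(e_u)[0] > f_h(e_u)[0] else "healthy"
```

With these well-separated synthetic clusters, a vector near the PH training mean, e.g. `classify([0.8, 1.0])`, falls on the PH side of the rule.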
Table 6. List of the significance levels and correlation coefficients with age of the thirty three measures.
No.   Entropy Measure         p Value        CC.
1     SampEn_Ener_s1 *        1.49 × 10^−5   −0.30
2     FuzzyEn_Ener_s1 *       2.12 × 10^−5   −0.29
3     FMEn_Ener_s1            6.42 × 10^−6   −0.33
4     SampEn_ShanEner_s2 *    5.88 × 10^−5   −0.28
5     FuzzyEn_ShanEner_s2 *   6.57 × 10^−5   −0.27
6     FMEn_ShanEner_s2 *      1.29 × 10^−4   −0.26
7     SampEn_max_f_s1 *       7.25 × 10^−4   0.21
8     FuzzyEn_max_f_s1 *      3.52 × 10^−3   0.14
9     FMEn_max_f_s1 *         6.47 × 10^−3   0.16
10    SampEn_Car_cycle        4.90 × 10^−3   −0.31
11    FuzzyEn_Car_cycle *     3.99 × 10^−3   −0.30
12    FMEn_Car_cycle          1.89 × 10^−3   −0.33
13    SampEn_max_pow_s2       1.20 × 10^−9   −0.41
14    FuzzyEn_max_pow_s2      2.69 × 10^−9   −0.39
15    FMEn_max_pow_s2         6.13 × 10^−9   −0.39
16    SampEn_max_f_s2         8.02 × 10^−7   0.43
17    FuzzyEn_max_f_s2        4.27 × 10^−5   0.39
18    SampEn_ShanEner_s1      9.79 × 10^−7   −0.37
19    FuzzyEn_ShanEner_s1     3.67 × 10^−6   −0.35
20    FMEn_ShanEner_s1        1.60 × 10^−6   −0.38
21    FMEn_max_f_s2           1.52 × 10^−6   0.41
22    SampEn_Ener_s2          1.11 × 10^−5   −0.31
23    FuzzyEn_Ener_s2         1.76 × 10^−5   −0.31
24    FMEn_Ener_s2            3.36 × 10^−5   −0.31
25    SampEn_max_pow_s1       1.58 × 10^−5   −0.40
26    FuzzyEn_max_pow_s1      1.14 × 10^−5   −0.40
27    FMEn_max_pow_s1         3.91 × 10^−6   −0.40
28    SampEn_Int_s2           5.38 × 10^−4   0.39
29    FuzzyEn_Int_s2          5.06 × 10^−4   0.42
30    FMEn_Int_s2             9.26 × 10^−3   0.37
31    SampEn_Int_s1           4.22 × 10^−1   0.21
32    FuzzyEn_Int_s1          2.35 × 10^−1   0.22
33    FMEn_Int_s1             3.92 × 10^−1   0.19
Note: the asterisk (*) marks the nine selected entropy measures.
Table 7. Identification performance of a single entropy measure based on leave-one-out cross-validation.
No.   Entropy Measure        Sen.    Spe.    Acc.    AUC     Corresponding pdf Pair and ROC Curve
1     SampEn_Ener_s1         0.720   0.648   0.683   0.720   Figure 6a1 and Figure 7a1
2     FuzzyEn_Ener_s1        0.680   0.648   0.663   0.714   Figure 6a2 and Figure 7a2
3     FuzzyEn_Car_cycle      0.600   0.852   0.731   0.709   Figure 6a3 and Figure 7a3
4     SampEn_ShanEner_s2     0.580   0.796   0.692   0.667   Figure 6b1 and Figure 7b1
5     FuzzyEn_ShanEner_s2    0.480   0.778   0.635   0.681   Figure 6b2 and Figure 7b2
6     FMEn_ShanEner_s2       0.500   0.796   0.654   0.670   Figure 6b3 and Figure 7b3
7     SampEn_Max_f_s1        0.540   0.759   0.654   0.629   Figure 6c1 and Figure 7c1
8     FuzzyEn_Max_f_s1       0.660   0.648   0.654   0.646   Figure 6c2 and Figure 7c2
9     FMEn_Max_f_s1          0.580   0.667   0.625   0.584   Figure 6c3 and Figure 7c3
Sen.: sensitivity; Spe.: specificity; Acc.: accuracy.
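These columns follow the usual confusion-matrix definitions. As an illustrative arithmetic check, the counts below are back-calculated from row 1 (50 PH patients, 54 healthy subjects); they are not reported in the paper.

```python
# Confusion-matrix arithmetic behind the Sen./Spe./Acc. columns.
def sen_spe_acc(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)      # PH patients correctly flagged
    specificity = tn / (tn + fp)      # healthy subjects correctly cleared
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# 36/50 sensitivity, 35/54 specificity, 71/104 accuracy (row 1 of Table 7).
sen, spe, acc = sen_spe_acc(tp=36, fn=14, tn=35, fp=19)
```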
Table 8. Identification performance of joint pdf of two entropy measures based on leave-one-out cross-validation.
No.   Joint Two Entropy Measures               Sen.    Spe.    Acc.    AUC     Corresponding pdfs
1     SampEn_Ener_s1/SampEn_ShanEner_s2        0.680   0.796   0.740   0.770   Figure 8a
2     SampEn_Ener_s1/FuzzyEn_ShanEner_s2       0.680   0.796   0.740   0.759   Figure 8b
3     SampEn_Ener_s1/FMEn_ShanEner_s2          0.640   0.778   0.712   0.756   Figure 8c
4     FuzzyEn_Ener_s1/SampEn_ShanEner_s2       0.640   0.778   0.712   0.759   Figure 8d
5     FuzzyEn_Ener_s1/FuzzyEn_ShanEner_s2      0.640   0.759   0.702   0.748   Figure 8e
6     FuzzyEn_Ener_s1/FMEn_ShanEner_s2         0.620   0.778   0.702   0.741   Figure 8f
7     FMEn_ShanEner_s2/SampEn_Max_f_s1         0.540   0.704   0.625   0.680   Figure 8g
8     FMEn_ShanEner_s2/FuzzyEn_Max_f_s1        0.580   0.648   0.615   0.680   Figure 8h
9     FMEn_ShanEner_s2/FMEn_Max_f_s1           0.520   0.722   0.625   0.666   Figure 8i
Table 9. Top ten identification performance of joint pdf of multiple entropy measures based on leave-one-out cross-validation.
No.   Joint Entropy Measures                                                                         Number of Joint Measures   Sen.    Spe.    Acc.    AUC
1     SampEn_Ener_s1/FuzzyEn_Ener_s1/SampEn_Max_f_s1/FuzzyEn_Car_cycle/FMEn_Max_f_s1                 5                          0.740   0.870   0.808   0.829
2     SampEn_Ener_s1/SampEn_Max_f_s1/FuzzyEn_Max_f_s1/FuzzyEn_Car_cycle                              4                          0.740   0.870   0.808   0.814
3     SampEn_Ener_s1/SampEn_Max_f_s1/FuzzyEn_Car_cycle/FMEn_Max_f_s1                                 4                          0.740   0.870   0.808   0.813
4     SampEn_Ener_s1/FuzzyEn_Ener_s1/SampEn_Max_f_s1/FuzzyEn_Car_cycle                               4                          0.720   0.870   0.798   0.839
5     SampEn_Ener_s1/FuzzyEn_Ener_s1/SampEn_Max_f_s1/FuzzyEn_Max_f_s1/FuzzyEn_Car_cycle/FMEn_Max_f_s1   6                       0.740   0.852   0.798   0.810
6     SampEn_Ener_s1/FuzzyEn_Car_cycle/FMEn_Max_f_s1                                                 3                          0.720   0.870   0.798   0.798
7     SampEn_Ener_s1/SampEn_Max_f_s1/FuzzyEn_Car_cycle                                               3                          0.720   0.852   0.788   0.821
8     SampEn_Ener_s1/FuzzyEn_Ener_s1/SampEn_Max_f_s1/FuzzyEn_Max_f_s1/FuzzyEn_Car_cycle              5                          0.760   0.815   0.788   0.818
9     SampEn_Ener_s1/SampEn_Max_f_s1/FuzzyEn_Max_f_s1/FuzzyEn_Car_cycle/FMEn_Max_f_s1                5                          0.720   0.833   0.779   0.801
10    FuzzyEn_Ener_s1/SampEn_Max_f_s1/FuzzyEn_Car_cycle                                              3                          0.680   0.852   0.769   0.815
