
Time Series Analysis Using Composite Multiscale Entropy

1 Department of Mechatronic Technology, National Taiwan Normal University, Taipei 10610, Taiwan
2 Department of Communication, Navigation and Control Engineering, National Taiwan Ocean University, Keelung 20224, Taiwan
3 Mechanical and Systems Research Laboratories, Industrial Technology Research Institute, Hsinchu 31040, Taiwan
4 Department of Engineering Science and Ocean Engineering, National Taiwan University, Taipei 10617, Taiwan
* Author to whom correspondence should be addressed.
Entropy 2013, 15(3), 1069-1084; https://doi.org/10.3390/e15031069
Submission received: 4 February 2013 / Revised: 25 February 2013 / Accepted: 13 March 2013 / Published: 18 March 2013

Abstract

Multiscale entropy (MSE) was recently developed to evaluate the complexity of time series over different time scales. Although the MSE algorithm has been successfully applied in a number of different fields, it suffers from the problem that the statistical reliability of the sample entropy (SampEn) of a coarse-grained series is reduced as the time scale factor increases. Therefore, in this paper, the concept of composite multiscale entropy (CMSE) is introduced to overcome this difficulty. Simulation results on both white noise and 1/f noise show that the CMSE provides higher entropy reliability than the MSE approach at large time scale factors. In real data analysis, both the MSE and CMSE are applied to extract features from bearing-fault vibration signals. Experimental results demonstrate that the proposed CMSE-based feature extractor provides higher separability than the MSE-based feature extractor.


1. Introduction

Quantifying the regularity of a time series is an essential task in understanding the behavior of a system. One of the most popular regularity measurements for a time series is the sample entropy (SampEn) [1], an unbiased estimator of the conditional probability that two similar sequences of m consecutive data points (m is the embedding dimension) will remain similar when one more consecutive point is included [2]. SampEn characterizes complexity strictly on the single time scale defined by the sampling procedure used to obtain the time series under evaluation; long-term structures in the time series cannot be captured by it. To address this disadvantage, Costa et al. proposed the multiscale entropy (MSE) algorithm [3], which computes the SampEn of a time series at multiple scales. The MSE has been successfully applied in many research fields over the past decade, including the analyses of human gait dynamics [2], heart rate variability [3,4], electroencephalograms [5], postural control [6], vibration of rotary machines [7,8], rainfall time series [9], river flow [10], electroseismic time series [11], traffic flow [12], social dynamics [13], chatter in the milling process [14], and vibrations of a vehicle [15]. These works demonstrate the effectiveness of the MSE algorithm for the analysis of complex time series.
The conventional MSE algorithm consists of two steps: (1) a coarse-graining procedure derives representations of a system's dynamics at different time scales; (2) the SampEn algorithm quantifies the regularity of the coarse-grained time series at each scale factor. To obtain a reliable entropy value from SampEn, the suggested time series length is in the range of 10^m to 30^m [16]. As reported in [2,5], for m = 2 the SampEn is essentially independent of the time series length when the number of data points is larger than 750. For a shorter time series, however, the variance of the entropy estimator grows rapidly as the number of data points is reduced. In the MSE algorithm, for a time series of N points, the length of the coarse-grained time series at a scale factor τ is N/τ: the larger the scale factor, the shorter the coarse-grained time series. Therefore, the variance of the SampEn estimate of the coarse-grained series increases as the scale factor increases. In many practical applications the data length is short, so the variance of the estimated entropy values at large scale factors becomes large, which reduces the reliability of distinguishing time series generated by different systems. In order to reduce this variance at large scales, a composite multiscale entropy (CMSE) algorithm is proposed in this paper. The effectiveness of the CMSE algorithm is evaluated on two synthetic noise signals and a real vibration data set provided by Case Western Reserve University (CWRU) [17].

2. Methods

2.1. Multiscale Entropy

Essentially, the MSE computes the SampEn over a sequence of scale factors. For a one-dimensional time series $x = \{x_1, x_2, \ldots, x_N\}$, the coarse-grained time series $y^{(\tau)}$ at a scale factor of τ is constructed according to the following equation [3]:
$$ y_j^{(\tau)} = \frac{1}{\tau} \sum_{i=(j-1)\tau+1}^{j\tau} x_i, \qquad 1 \le j \le N/\tau \tag{1} $$
As shown in Figure 1, the original time series is divided into non-overlapping windows of length τ, and the data points inside each window are averaged. We then define the entropy measurement of each coarse-grained time series as the MSE value. In this paper, the SampEn [1] is used as the entropy measurement, computed with the algorithm proposed in [18], which is reproduced in Figure 2. Throughout this paper, MSE values are calculated from scale 1 to scale 20 (τ = 1 to 20), and the sample entropy of each coarse-grained time series is calculated with m = 2 and r = 0.15σ [2], where σ denotes the standard deviation (SD) of the original time series.
Figure 1. Schematic illustration of the coarse-grained procedure. Modified from reference [3].
Figure 2. Brute force method. Modified from reference [18].
Most entropy measurements depend on the length of the time series. Since the length of each coarse-grained time series equals the length of the original time series divided by the scale factor τ, the variance of the entropy estimate grows as the coarse-grained time series shortens; the estimation error of the conventional MSE algorithm is therefore very large at large scale factors. In the following section, a modified MSE algorithm, named composite multiscale entropy (CMSE), is proposed to overcome this drawback.
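The coarse-graining procedure and the SampEn measurement described above can be sketched in a few lines. The following Python version is an illustrative re-implementation (the authors' Matlab code is given in Appendix A); the function names and the default tolerance are our own choices.

```python
import numpy as np

def coarse_grain(x, tau):
    """Average consecutive non-overlapping windows of length tau
    (the coarse-graining procedure of Section 2.1)."""
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

def samp_en(x, m=2, r=0.2):
    """Sample entropy -ln(A/B): B counts pairs of matching templates of
    length m, A counts those that still match at length m + 1."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    A = B = 0
    for i in range(n - m):
        for j in range(i + 1, n - m):
            if np.max(np.abs(x[i:i + m] - x[j:j + m])) < r:
                B += 1
                if abs(x[i + m] - x[j + m]) < r:
                    A += 1
    return -np.log(A / B)
```

With m = 2 and r = 0.15σ of the original series, `samp_en(coarse_grain(x, tau), 2, 0.15 * x.std())` gives the MSE value of `x` at scale `tau`.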

2.2. Composite Multiscale Entropy

As shown in Figure 3, for scale factors of 2 and 3 there are, respectively, two and three distinct coarse-grained time series that can be derived from the original time series. The k-th coarse-grained time series for a scale factor of τ, $y_k^{(\tau)} = \{y_{k,1}^{(\tau)}, y_{k,2}^{(\tau)}, \ldots, y_{k,p}^{(\tau)}\}$, is defined as
$$ y_{k,j}^{(\tau)} = \frac{1}{\tau} \sum_{i=(j-1)\tau+k}^{j\tau+k-1} x_i, \qquad 1 \le j \le N/\tau, \; 1 \le k \le \tau \tag{2} $$
Figure 3. Schematic illustration of the CMSE procedure.
In the conventional MSE algorithm, at each scale the MSE value is computed using only the first coarse-grained time series, $y_1^{(\tau)}$:
$$ \mathrm{MSE}(x, \tau, m, r) = \mathrm{SampEn}\left(y_1^{(\tau)}, m, r\right) \tag{3} $$
In the CMSE algorithm, at a scale factor of τ, the sample entropies of all τ coarse-grained time series are calculated, and the CMSE value is defined as the mean of the τ entropy values. That is:
$$ \mathrm{CMSE}(x, \tau, m, r) = \frac{1}{\tau} \sum_{k=1}^{\tau} \mathrm{SampEn}\left(y_k^{(\tau)}, m, r\right) \tag{4} $$
Figure 4 shows the flow charts of the MSE and CMSE algorithms for comparison. The Matlab code of CMSE is shown in Appendix A.
Figure 4. Flow charts of MSE and CMSE algorithms.
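The CMSE definition above (the mean of the SampEn values of the τ shifted coarse-grained series) can be sketched as follows. This is an illustrative Python version, not the authors' Matlab implementation in Appendix A; the helper names are our own.

```python
import numpy as np

def samp_en(x, m=2, r=0.2):
    # compact sample entropy, same definition as in Section 2.1
    n = len(x)
    A = B = 0
    for i in range(n - m):
        for j in range(i + 1, n - m):
            if np.max(np.abs(x[i:i + m] - x[j:j + m])) < r:
                B += 1
                A += abs(x[i + m] - x[j + m]) < r
    return -np.log(A / B)

def cmse(x, scale, m=2):
    """Composite multiscale entropy at one scale: average SampEn over
    the `scale` shifted coarse-grained series y_k, k = 1..tau."""
    x = np.asarray(x, dtype=float)
    r = 0.15 * x.std()            # tolerance from the SD of the original series
    vals = []
    for k in range(scale):        # k-th shifted coarse-grained series
        y = x[k:]
        n = len(y) // scale
        y = y[:n * scale].reshape(n, scale).mean(axis=1)
        vals.append(samp_en(y, m, r))
    return np.mean(vals)
```

At scale 1 there is a single coarse-grained series, so the CMSE reduces to the SampEn of the original series, matching the MSE.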

3. Comparative Study of MSE and CMSE

To evaluate the effectiveness of the CMSE, it is compared with the MSE in this section using two synthetic noise signals (white noise and 1/f noise) and a real vibration data set.

3.1. White Noise and 1/f Noise

The SampEn of a coarse-grained white noise time series can be computed analytically as [4]:
$$ \mathrm{SampEn} = -\ln\left\{ \frac{1}{2\sigma}\sqrt{\frac{\tau}{2\pi}} \int_{-\infty}^{+\infty} \left[ \operatorname{erf}\!\left(\frac{x+r}{\sigma\sqrt{2/\tau}}\right) - \operatorname{erf}\!\left(\frac{x-r}{\sigma\sqrt{2/\tau}}\right) \right] \exp\!\left(-\frac{x^2 \tau}{2\sigma^2}\right) dx \right\} \tag{5} $$
where erf denotes the error function [4]. For 1/f noise, the analytic value of SampEn is 1.8 at all scales (with N = 30,000) [4]. To investigate the effect of data length on the MSE and CMSE, we first test the MSE on simulated white noise of different data lengths. As shown in Figure 5a, for short time series the estimated MSE values differ significantly from the analytic solutions; this error may reduce the reliability of distinguishing time series generated by different systems. We then applied the MSE to simulated 1/f noise of different data lengths. As shown in Figure 5b, the variance of the entropy estimator increases as the data length is reduced, and a difference between the analytic and numerical solutions exists at all scales. Figure 6a,b show the entropies of white noise and 1/f noise obtained by the CMSE, respectively. In comparison with the MSE estimates, the variance of the entropy estimator is evidently reduced by the CMSE. However, the overestimation due to the short data length still exists when the CMSE is applied to the 1/f noise.
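The analytic expression above can be evaluated numerically. The sketch below (an illustration, not part of the original paper) integrates it with the trapezoidal rule, using the fact that the SD of coarse-grained white noise is σ/√τ; for white noise the integrand also reduces to the closed form erf(r√τ/(2σ)), which can serve as a consistency check.

```python
import numpy as np
from math import erf, exp, log, pi, sqrt

def analytic_sampen_white(tau, sigma=1.0, r=0.15):
    """Trapezoidal-rule evaluation of the analytic SampEn of coarse-grained
    Gaussian white noise; the coarse-grained SD is sigma/sqrt(tau).
    Defaults assume sigma = 1 and r = 0.15*sigma as in the paper."""
    s = sigma / sqrt(tau)
    xs = np.linspace(-8 * s, 8 * s, 20001)      # integration grid; tails are negligible
    f = np.array([(erf((x + r) / (s * sqrt(2))) - erf((x - r) / (s * sqrt(2))))
                  * exp(-x * x / (2 * s * s)) for x in xs])
    dx = xs[1] - xs[0]
    integral = dx * (f.sum() - 0.5 * (f[0] + f[-1]))  # trapezoidal rule
    p = integral / (2 * s * sqrt(2 * pi))       # per-component match probability
    return -log(p)
```

For example, `analytic_sampen_white(1)` should agree with `-log(erf(0.075))`, the closed-form value at scale 1 for r = 0.15σ.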
Figure 5. MSE results of (a) white noise and (b) 1/f noise with different data lengths.
Figure 6. CMSE results of (a) white noise and (b) 1/f noise with different data lengths.
The numerical results for white noise with two different data lengths (N = 2,000 and 10,000) are shown in Figure 7. The error bar at each scale indicates the SD of the entropy values calculated from 100 independent noise signals. For a scale factor of one, the MSE value is equal to the CMSE value because the coarse-grained time series is identical to the original time series. In all cases, the means of the entropy values show no significant difference between the MSE and CMSE; however, the SDs differ. For the longer white noise series (N = 10,000, Figure 7b), the SD of the CMSE is slightly less than that of the MSE. For the shorter series (N = 2,000, Figure 7a), the SD of the CMSE is greatly reduced at large scales: for instance, at a scale factor of 20 the SD of the MSE is 0.1033 while that of the CMSE is only 0.0658. Figure 8a,b show the results of the MSE and CMSE applied to 1/f noise with 2,000 and 10,000 data points, respectively. The results for 1/f noise are similar to those for white noise: the CMSE reduces the SDs of the estimates.
Figure 7. MSE and CMSE results of white noise with data lengths (a) N = 2,000 and (b) N = 10,000.
Figure 8. MSE and CMSE results of 1/f noise with data lengths (a) N = 2,000 and (b) N = 10,000.
Table 1 summarizes the SDs of the MSE and CMSE at different time scales. These results indicate that the entropy values calculated by the conventional MSE and the CMSE algorithms are almost the same, but the CMSE estimates entropy values more accurately than the MSE. This improvement is significant when the CMSE is used to analyze time series with short data lengths.
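The variance reduction summarized in Table 1 can be reproduced qualitatively with a small Monte Carlo sketch. The parameters below (20 realizations of 500-point white noise at scale 5) and the vectorized SampEn are our own illustrative choices, not the paper's experimental settings.

```python
import numpy as np

def samp_en(x, m=2, r=0.2):
    # vectorized sample entropy; templates compared with the Chebyshev distance
    x = np.asarray(x, dtype=float)
    n = len(x) - m                      # same number of templates for both lengths
    def count(mm):
        t = np.lib.stride_tricks.sliding_window_view(x, mm)[:n]
        d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)
        return np.count_nonzero(np.triu(d < r, k=1))   # ordered pairs i < j
    return -np.log(count(m + 1) / count(m))

def coarse(x, tau, k=0):
    # k-th shifted coarse-grained series (k = 0 gives the conventional one)
    y = x[k:]
    n = len(y) // tau
    return y[:n * tau].reshape(n, tau).mean(axis=1)

rng = np.random.default_rng(42)
tau, N, reps = 5, 500, 20
mse_vals, cmse_vals = [], []
for _ in range(reps):
    x = rng.normal(size=N)
    r = 0.15 * x.std()
    mse_vals.append(samp_en(coarse(x, tau), r=r))   # first coarse-grained series only
    cmse_vals.append(np.mean([samp_en(coarse(x, tau, k), r=r) for k in range(tau)]))
print("SD of MSE:", np.std(mse_vals), " SD of CMSE:", np.std(cmse_vals))
```

On typical runs the printed SD of the CMSE values is noticeably smaller than that of the MSE values, in line with Table 1.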
Table 1. Standard deviations of the MSE and CMSE at different time scales.
Data Length   Signal        Method   Scale: 1      3      5      7      9      11     13     15     17     19     20
2,000         white noise   MSE             0.026  0.049  0.059  0.064  0.072  0.067  0.084  0.076  0.091  0.091  0.103
2,000         white noise   CMSE            0.026  0.030  0.037  0.043  0.046  0.051  0.054  0.054  0.057  0.063  0.066
2,000         1/f noise     MSE             0.084  0.097  0.121  0.139  0.162  0.227  0.236  0.284  0.265  0.282  0.310
2,000         1/f noise     CMSE            0.084  0.088  0.092  0.099  0.108  0.126  0.122  0.148  0.153  0.155  0.163
10,000        white noise   MSE             0.007  0.014  0.019  0.025  0.028  0.030  0.032  0.035  0.038  0.035  0.035
10,000        white noise   CMSE            0.007  0.010  0.015  0.018  0.020  0.021  0.023  0.025  0.025  0.027  0.028
10,000        1/f noise     MSE             0.069  0.069  0.070  0.073  0.072  0.071  0.078  0.074  0.086  0.085  0.080
10,000        1/f noise     CMSE            0.069  0.068  0.068  0.067  0.069  0.069  0.068  0.069  0.072  0.072  0.070

3.2. Real Vibration Data

In order to validate the utility of the CMSE algorithm on real data, an experimental analysis of bearing faults is carried out. All the bearing fault data used in this paper are obtained from the Case Western Reserve University (CWRU) Bearing Data Center [17]. The test stand is composed of a 2-horsepower motor and a dynamometer, connected by a torque transducer. Single-point faults with diameters of 7, 14, and 21 mils (1 mil is one thousandth of an inch) were seeded in the test bearings using electro-discharge machining. The bearing conditions include the normal state, ball faults, inner race faults, and outer race faults located at the 3, 6, and 12 o'clock positions, which are at 0°, 270°, and 90° on the front section diagram of the bearing, respectively. In other words, on the cross-section diagram of the bearing, the 3 o'clock position is parallel to the direction of the load zone, while the 6 and 12 o'clock positions are perpendicular to it. Vibration data are collected by accelerometers placed at the 12 o'clock position at both the drive end and the fan end of the motor housing. Digital data are collected at a sampling rate of 48,000 samples per second for the drive-end bearing experiments. The motor speeds, controlled by the motor load, are set to 1,730, 1,750, and 1,772 rpm.
In the experiments, the vibration signals were divided into non-overlapping segments of a specified data length, N = 2,000. Each segment was regarded as one sample in the validation process; the numbers of samples for each bearing condition are listed in Table 2. We then calculated the MSE and CMSE values up to scale 20 for each sample, so the dimension of a sample in the feature space is 20 in the following experiments. Parts of the measured acceleration signals for the six different conditions are shown in Figure 9. The MSE and CMSE of the bearing data in each condition are shown in Figure 10. For each condition, the means of the entropy estimates obtained by the CMSE are very similar to those obtained by the MSE, while smaller SDs are achieved by the CMSE; this is consistent with the analysis results for the synthetic noise signals. The collected data in Figure 10 depend on the neighboring states sampled in the experiments.
Table 2. Numbers of data sets corresponding to different fault classes, defect levels, and rotation speeds.
Fault Class             1730 rpm (7/14/21 mils)   1750 rpm (7/14/21 mils)   1772 rpm (7/14/21 mils)
Normal state            243                       242                       242
Ball fault              244 / 243 / 243           243 / 243 / 243           243 / 243 / 243
Inner race fault        243 / 242 / 244           243 / 244 / 245           243 / 191 / 242
Outer race fault (3)    243 /  —  / 242           243 /  —  / 243           242 /  —  / 245
Outer race fault (6)    244 / 244 / 244           243 / 243 / 244           243 / 242 / 244
Outer race fault (12)   242 /  —  / 243           241 /  —  / 243           241 /  —  / 243
(The normal state has no fault diameter; "—" denotes conditions without data.)
Figure 9. Measured acceleration signals of vibrations in the time domain of six different bearing conditions (a) normal state, ball fault and inner race fault; (b) outer race faults at 3, 6, and 12 o’clock positions.
Figure 10. MSE and CMSE results on bearing vibration data (1,730 rpm, 7 mils). (a) Normal state. (b) Ball fault. (c) Inner race fault. (d) Outer race fault (3 o’clock position). (e) Outer race fault (6 o’clock position). (f) Outer race fault (12 o’clock position).

3.3. Performance Assessment

In this subsection, we first use the Mahalanobis distance to assess the effectiveness of the MSE and CMSE methods. The Mahalanobis distance [19] is a popular measure of the separation of two groups of samples. Assuming two groups with mean vectors $\bar{s}_1$ and $\bar{s}_2$, the Mahalanobis distance is given by [19]:
$$ d = (\bar{s}_1 - \bar{s}_2)^{T} C^{-1} (\bar{s}_1 - \bar{s}_2) \tag{6} $$
The pooled variance-covariance matrix C in equation (6) is shown below [19]:
$$ C = \frac{1}{n_1 + n_2 - 2}\left(n_1 C_1 + n_2 C_2\right) \tag{7} $$
where $n_i$ is the number of samples in group i and $C_i$ is the covariance matrix of group i.
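Equation (6) with the pooled variance-covariance matrix can be sketched as follows (an illustrative Python helper; the function name is our own, and each group matrix holds one sample per row):

```python
import numpy as np

def mahalanobis_separation(S1, S2):
    """Separation of two sample groups (rows = samples) using the pooled
    variance-covariance matrix C = (n1*C1 + n2*C2) / (n1 + n2 - 2)."""
    s1, s2 = S1.mean(axis=0), S2.mean(axis=0)
    n1, n2 = len(S1), len(S2)
    C = (n1 * np.cov(S1, rowvar=False) + n2 * np.cov(S2, rowvar=False)) / (n1 + n2 - 2)
    diff = s1 - s2
    return diff @ np.linalg.solve(C, diff)   # (s1 - s2)^T C^{-1} (s1 - s2)
```

Identical groups give a distance of zero, and shifting one group away from the other increases the distance, matching the interpretation of Table 3.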
Table 3 shows the Mahalanobis distances for six different distinguishing conditions. The motor speed was set at 1,730 rpm and the fault diameter was 7 mils. The conditions are the normal state (N), ball fault (B), inner race fault (I), and outer race faults at the 3, 6, and 12 o'clock positions (O3, O6, and O12). A larger Mahalanobis distance in the table represents a higher level of linear separability between two groups [19]. Comparing N with B, both the CMSE and MSE values are high, and the CMSE value is higher than that of the MSE, indicating that the normal state can easily be distinguished from the ball fault and that the CMSE offers higher distinguishability. More generally, the normal state is easily distinguished from the ball fault, the inner race fault, and the outer race faults at the 3 and 6 o'clock positions, but less easily from the outer race fault at the 12 o'clock position, owing to the smaller distance in Table 3. The most indistinguishable conditions are (1) the ball fault versus the inner race fault and (2) the outer race faults at the 3 and 6 o'clock positions. In addition, in all cases the Mahalanobis distance between two groups of features extracted by the CMSE algorithm is larger than that for features extracted by the MSE algorithm. Therefore, compared with the MSE, the CMSE provides higher distinguishability as a feature extractor.

3.4. Fault Diagnosis Using an Artificial Neural Network

We built a fault diagnosis system based on a neural network, using the CMSE as the feature extractor and comparing it with the MSE. The entropy values at the twenty scales were used as the features for bearing fault diagnosis. The neural network was trained on the bearing vibration data with the MATLAB Neural Networks Toolbox V6.0.2. A three-layer backpropagation neural network was trained by the Levenberg–Marquardt algorithm [20]. The network had 20 nodes in the input layer (each node corresponding to one scale of the MSE or CMSE), 30 nodes in the hidden layer, and 4 or 6 nodes in the output layer, depending on how many fault conditions were considered. For training, a target mean square error of 0, a learning rate of 0.001, a minimum gradient of 10^−10, and a maximum iteration number of 1,000 were used. To improve generalization, the data sets were randomly divided into three parts: (1) training (50%), (2) validation (15%), and (3) testing (35%). The average prediction accuracy of each experiment was computed over 200 tests.
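The random 50%/15%/35% partition described above can be sketched as follows (an illustrative helper; the seed and function name are our own):

```python
import numpy as np

def split_indices(n, fractions=(0.50, 0.15, 0.35), seed=0):
    """Randomly partition n sample indices into training, validation, and
    test sets with the stated proportions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)                      # shuffled sample indices
    n_train = int(round(fractions[0] * n))
    n_val = int(round(fractions[1] * n))
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```

Every sample lands in exactly one of the three sets, so the split can be repeated with different seeds to average the accuracy over many tests, as done in Table 4.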
Table 3. Mahalanobis distances for six different distinguishing conditions.
Fault Class 1   Extractor   N        B        I        O3       O6       O12
N               MSE         —        25.200   25.200   20.044   25.484   5.538
                CMSE        —        25.898   28.069   21.614   26.115   6.992
B               MSE         25.200   —        3.675    5.312    5.515    9.657
                CMSE        25.898   —        5.852    8.105    7.534    11.387
I               MSE         25.200   3.675    —        7.128    7.560    12.883
                CMSE        28.069   5.852    —        8.672    9.652    16.965
O3              MSE         20.044   5.312    7.128    —        5.934    9.303
                CMSE        21.614   8.105    8.672    —        6.605    13.239
O6              MSE         25.484   5.515    7.560    5.934    —        9.624
                CMSE        26.115   7.534    9.652    6.605    —        12.139
O12             MSE         5.538    9.657    12.883   9.303    9.624    —
                CMSE        6.992    11.387   16.965   13.239   12.139   —
In this paper, we conducted nine experiments, each with a single operating speed and a single fault diameter. Table 4 lists the diagnostic accuracy of the MSE and CMSE for each experiment. The experiments with a fault diameter of 21 mils have lower diagnostic accuracy than the others; however, in these less distinguishable cases, the improvement from using the CMSE as the feature extractor is more pronounced. It can therefore be inferred that the accuracy of bearing fault diagnosis can be enhanced by the CMSE. Furthermore, although the CMSE is applied here only to univariate time series, the composite approach can also be extended to multivariate time series [21]. The proposed CMSE algorithm is formulated for SampEn in this work; it can likewise be applied to permutation entropy when multiscale analysis is performed [22,23].
Table 4. Diagnostic accuracy results using MSE and CMSE.
Feature Extractor   1730 rpm: 7 / 14 / 21 mils    1750 rpm: 7 / 14 / 21 mils    1772 rpm: 7 / 14 / 21 mils
MSE                 97.33% / 98.81% / 96.77%      99.05% / 98.01% / 95.65%      99.34% / 96.67% / 95.89%
CMSE                99.29% / 99.75% / 98.26%      99.58% / 99.86% / 98.50%      99.91% / 99.65% / 98.42%

4. Conclusions

In this paper, the concept of the CMSE is introduced for analyzing the complexity of a time series. The proposed method performs better on short time series than the MSE. For white noise and 1/f noise, simulation results show that the CMSE provides a more reliable estimate of entropy than the MSE. In addition, with the CMSE as the feature extractor of the bearing fault diagnosis system and the Mahalanobis distance as the performance measure, the results show that the CMSE enhances linear separability in comparison with the MSE. Experimental results also demonstrate that the proposed CMSE provides higher accuracy in bearing fault diagnosis.

Acknowledgments

This work was supported by the National Science Council, R.O.C., under Grant NSC 101-2221-E-003-013.

References

  1. Richman, J.S.; Moorman, J.R. Physiological time-series analysis using approximate entropy and sample entropy. Am. J. Physiol. Heart Circul. Physiol. 2000, 278, H2039–H2049. [Google Scholar]
  2. Costa, M.; Peng, C.K.; Goldberger, A.L.; Hausdorff, J.M. Multiscale entropy analysis of human gait dynamics. Physica A 2003, 330, 53–60. [Google Scholar] [CrossRef]
  3. Costa, M.; Goldberger, A.L.; Peng, C.K. Multiscale entropy analysis of complex physiologic time series. Phys. Rev. Lett. 2002, 89, 68102. [Google Scholar] [CrossRef]
  4. Costa, M.; Goldberger, A.L.; Peng, C.K. Multiscale entropy analysis of biological signals. Phys. Rev. E Stat. Nonlin Soft Matter Phys. 2005, 71, 021906. [Google Scholar] [CrossRef] [PubMed]
  5. Escudero, J.; Abásolo, D.; Hornero, R.; Espino, P.; López, M. Analysis of electroencephalograms in Alzheimerʼs disease patients with multiscale entropy. Physiol. Meas. 2006, 27, 1091. [Google Scholar] [CrossRef] [PubMed]
  6. Manor, B.; Costa, M.D.; Hu, K.; Newton, E.; Starobinets, O.; Kang, H.G.; Peng, C.; Novak, V.; Lipsitz, L.A. Physiological complexity and system adaptability: evidence from postural control dynamics of older adults. J. Appl. Physiol. 2010, 109, 1786–1791. [Google Scholar] [CrossRef] [PubMed]
  7. Zhang, L.; Xiong, G.; Liu, H.; Zou, H.; Guo, W. Bearing fault diagnosis using multi-scale entropy and adaptive neuro-fuzzy inference. Expert Syst. Appl. 2010, 37, 6077–6085. [Google Scholar] [CrossRef]
  8. Lin, J.L.; Liu, J.Y.C.; Li, C.W.; Tsai, L.F.; Chung, H.Y. Motor shaft misalignment detection using multiscale entropy with wavelet denoising. Expert Syst. Appl. 2010, 37, 7200–7204. [Google Scholar] [CrossRef]
  9. Chou, C.M. Wavelet-based multi-scale entropy analysis of complex rainfall time series. Entropy 2011, 13, 241–253. [Google Scholar] [CrossRef]
  10. Li, Z.; Zhang, Y.K. Multi-scale entropy analysis of Mississippi River flow. Stoch. Environ. Res. Risk Assess. 2008, 22, 507–512. [Google Scholar] [CrossRef]
  11. Guzmán-Vargas, L.; Ramírez-Rojas, A.; Angulo-Brown, F. Multiscale entropy analysis of electroseismic time series. Nat. Hazards Earth Syst. Sci. 2008, 8, 855–860. [Google Scholar] [CrossRef]
  12. Yan, R.-Y.; Zheng, Q.-H. Multi-Scale Entropy Based Traffic Analysis and Anomaly Detection. In Proceedings of the Eighth International Conference on Intelligent Systems Design and Applications, ISDA 2008, Kaohsiung, Taiwan, 26–28 November 2008; Volume 3, pp. 151–157.
  13. Glowinski, D.; Coletta, P.; Volpe, G.; Camurri, A.; Chiorri, C.; Schenone, A. Multi-Scale Entropy Analysis of Dominance in Social Creative Activities. In Proceedings of the 18th International Conference on Multimedia, Firenze, Italy, 25–29 October 2010; Association for Computing Machinery: New York, NY, USA, 2010; pp. 1035–1038. [Google Scholar]
  14. Litak, G.; Syta, A.; Rusinek, R. Dynamical changes during composite milling: recurrence and multiscale entropy analysis. Int. J. Adv. Manuf. Technol. 2011, 56, 445–453. [Google Scholar] [CrossRef]
  15. Borowiec, M.; Sen, A.K.; Litak, G.; Hunicz, J.; Koszałka, G.; Niewczas, A. Vibrations of a vehicle excited by real road profiles. Forsch. Ing.Wes. (Eng. Res.) 2010, 74, 99–109. [Google Scholar] [CrossRef]
  16. Liu, Q.; Wei, Q.; Fan, S.Z.; Lu, C.W.; Lin, T.Y.; Abbod, M.F.; Shieh, J.S. Adaptive Computation of Multiscale Entropy and Its Application in EEG Signals for Monitoring Depth of Anesthesia During Surgery. Entropy 2012, 14, 978–992. [Google Scholar] [CrossRef]
  17. Case Western Reserve University Bearing Data Center Website. Available online: http://csegroups.case.edu/bearingdatacenter/pages/download-data-file/ (accessed on 5 May 2011).
  18. Pan, Y.H.; Lin, W.Y.; Wang, Y.H.; Lee, K.T. Computing multiscale entropy with orthogonal range search. J. Mar. Sci. Technol. 2011, 19, 107–113. [Google Scholar]
  19. Crossman, J.A.; Guo, H.; Murphey, Y.L.; Cardillo, J. Automotive signal fault diagnostics—Part I: signal fault analysis, signal segmentation, feature extraction and quasi-optimal feature selection. IEEE Trans. Veh. Technol. 2003, 52, 1063–1075. [Google Scholar] [CrossRef]
  20. Hagan, M.T.; Menhaj, M.B. Training feedforward networks with the Marquardt algorithm. IEEE Trans. Neural Netw. 1994, 5, 989–993. [Google Scholar] [CrossRef] [PubMed]
  21. Wei, Q.; Liu, D.H.; Wang, K.H.; Liu, Q.; Abbod, M.F.; Jiang, B.C.; Chen, K.P.; Wu, C.; Shieh, J.S. Multivariate multiscale entropy applied to center of pressure signals analysis: an effect of vibration stimulation of shoes. Entropy 2012, 14, 2157–2172. [Google Scholar] [CrossRef]
  22. Wu, S.D.; Wu, P.H.; Wu, C.W.; Ding, J.J.; Wang, C.C. Bearing fault diagnosis based on multiscale permutation entropy and support vector machine. Entropy 2012, 14, 1343–1356. [Google Scholar] [CrossRef]
  23. Morabito, F.C.; Labate, D.; La Foresta, F.; Bramanti, A.; Morabito, G.; Palamara, I. Multivariate multi-scale permutation entropy for complexity analysis of Alzheimer’s disease EEG. Entropy 2012, 14, 1186–1202. [Google Scholar] [CrossRef]

Appendix A. The Matlab Code for the Composite Multiscale Entropy Algorithm

function E = CMSE(data,scale)
E = zeros(1,scale);   % pre-allocate the entropy values
r = 0.15*std(data);   % tolerance from the SD of the original series
for i = 1:scale       % i: scale index
    for j = 1:i       % j: coarse-grained series index
        buf = CoarseGrain(data(j:end),i);
        E(i) = E(i) + SampEn(buf,r)/i;
    end
end
 
%Coarse Grain Procedure. See Equation (2)
% iSig: input signal ; s : scale numbers ; oSig: output signal
function oSig=CoarseGrain(iSig,s)
N=length(iSig); %length of input signal
for i=1:1:N/s
    oSig(i)=mean(iSig((i-1)*s+1:i*s));
end
 
%function to calculate sample entropy (m = 2). See Figure 2
function entropy = SampEn(data,r)
l = length(data);
Nn = 0;
Nd = 0;
for i = 1:l-2
    for j = i+1:l-2
        if abs(data(i)-data(j))<r && abs(data(i+1)-data(j+1))<r
            Nn = Nn+1;
            if abs(data(i+2)-data(j+2))<r
                Nd = Nd+1;
            end
        end
    end
end
entropy = -log(Nd/Nn);

Share and Cite

Wu, S.-D.; Wu, C.-W.; Lin, S.-G.; Wang, C.-C.; Lee, K.-Y. Time Series Analysis Using Composite Multiscale Entropy. Entropy 2013, 15, 1069-1084. https://doi.org/10.3390/e15031069