Article

Respiratory Rate Estimation Combining Autocorrelation Function-Based Power Spectral Feature Extraction with Gradient Boosting Algorithm

1 Department of Computer Engineering, Sejong University, 209 Neungdong-ro, Gwangjin-gu, Seoul 05006, Korea
2 Department of Software Science & Engineering, Kunsan National University, 558 Daehak-ro, Gunsan-si 54150, Korea
3 Ingenium College, Kwangwoon University, 20 Kwangwoon-ro, Nowon-gu, Seoul 01897, Korea
* Authors to whom correspondence should be addressed.
Appl. Sci. 2022, 12(16), 8355; https://doi.org/10.3390/app12168355
Submission received: 2 July 2022 / Revised: 19 August 2022 / Accepted: 19 August 2022 / Published: 21 August 2022
(This article belongs to the Special Issue Deep Learning and Machine Learning in Biomedical Data)

Abstract

Various machine learning models have been used in biomedical engineering, but only a small number of studies have addressed respiratory rate estimation. Unlike ensemble models that take simple averages of basic learners, such as bagging, random forest, and boosting, the gradient boosting algorithm is built on effective iteration strategies, and it is only beginning to be used for respiratory rate estimation. We therefore propose a novel methodology that combines an autocorrelation function-based power spectral feature extraction process with the gradient boosting algorithm to estimate respiratory rate: the feature extraction step finds the periodicity in the time domain and thereby yields the respiration frequency. The proposed methodology avoids overfitting on the training datasets because we extend the data dimension through the autocorrelation function-based power spectral feature extraction and then split the long resampled wave signal to increase the number of input data samples. The proposed model provides accurate respiratory rate estimates and offers a solution for reliably managing estimation uncertainty. In addition, the proposed method yields more precise estimates than conventional respiratory rate measurement techniques.

1. Introduction

Respiratory rate (RR) is an important vital sign for monitoring disease progression. Irregular RR is an essential indicator of diseases such as pneumonia, heart failure, and cardiac arrest [1]. Therefore, RR monitoring at home and in hospitals can help clinicians diagnose patients and document their medical prognosis. There is medical evidence that rapid changes in RR may predict potentially severe events, such as sudden cardiac arrest or intensive care unit admission [1]. Hence, RRs are used in emergency room examinations and primary care to identify hypercapnia, pulmonary embolism, pneumonia, and sepsis. RR can be obtained by manually counting chest wall movements, but this procedure is inaccurate [2], time-consuming [3], and poorly performed [4]. Moreover, RR monitors are not typically available as medical wearable sensors. Therefore, there is an essential role for electronic, unobtrusive methods of measuring RR, such as estimating RR from the electrocardiogram (ECG) or from photoplethysmography (PPG) using wearable sensors [5,6,7]. ECG measures the current generated by the action potential of the myocardium during each heartbeat, and ECG monitors are integrated with wearable sensors to track a patient's heart rate (HR) and rhythm while walking [8]. Unlike ECG, PPG measures the amount of blood that changes over time in a tissue bed [9], illuminated by either ambient light or a supplementary light source [10]. PPG is used for continuous HR monitoring in fitness devices and critically ill patients, and PPG-based devices have been developed for blood perfusion assessments and pulse transit time measurements [5]. However, the standard instruments currently used to measure RR require monitoring CO2 production based on capnography. This method is expensive and requires significant medical equipment management [11]. In addition, a nasal cannula or mask is needed, which is a burden for the patient.
As an alternative, an accurate RR can be obtained from a pulse oximeter for saturation of partial pressure oxygen (SpO2), which is user-friendly and economical [11]. Addison P.S. et al. developed an algorithm for estimating RR from pulse oximeter signals [12], which offers an economical measurement method. Recently, researchers and users in the field of medical engineering have been predicting SpO2 using smart watches or wearable devices based on PPG technology. A characteristic of the PPG signal is its oscillatory nature, so the peaks and troughs of the signal can easily be located on the time axis. Hence, we can detect the peaks and troughs of the PPG signal using methods such as baseline, amplitude, and frequency modulations [13]. Modulation-based methods for predicting RR from PPG signals have been published using time-frequency spectrum estimation [11], sparse signal reconstruction, and the continuous wavelet transform [14], which was introduced by Addison et al. [14]. These techniques estimate the RR within the PPG spectral domain. Currently, various techniques based on PPG signals are used for RR estimation [14,15,16]. Unfortunately, few researchers have published RR prediction results using machine learning techniques [17,18]. Liu S. et al. [18] published generative boosting with a long short-term memory (LSTM) network for RR estimation, including vital signals. LSTM technology has attracted attention in machine learning (ML) and offers advantages when dealing with time series problems [19,20]. Recently, Kumar A.K. et al. [21] introduced a framework for predicting RR from PPG and ECG signals based on LSTM. An ensemble-based gradient boosting algorithm (GBA) based on multiphase features [22,23] was applied to improve the performance of RR prediction.
In particular, several techniques, such as the autoregressive method [24], multifractal wavelet leaders [25], wavelet packets [26], and the maximum overlap discrete wavelet transform [27,28], have been used to extract features to compensate for insufficient data. However, it is difficult to determine which features are needed to obtain the optimal estimation rate. The ensemble GBA based on multiphase features also has the disadvantage that extracting the features required for RR estimation from the PPG signal is inconvenient and time-consuming.
Another well-known ML model successfully used for estimation in biomedical engineering is support vector regression (SVR) [29,30]. SVR can effectively approximate nonlinear effects even with a small number of training datasets. However, SVR is more complex to tune than the recent GBAs discussed in [31]. Significant progress has been made in developing new ML models over the past decade, and one of the most effective approaches to improving estimation accuracy is ensemble ML algorithms [32]. An ensemble algorithm composes a model by training several basic models, such as decision trees, and then combining them to produce a model with a higher estimation probability [31]. Unlike methods based on simple averages of basic learners, such as bagging [33], random forest [34], and boosting [35], the GBA is based on effective iteration strategies. This ensemble ML model has been successfully used in many areas and is just beginning to be used for RR estimation [22]. Against this background, we propose a novel methodology combining an autocorrelation function-based power spectral feature extraction process with the GBA (CAGBA) to estimate RR. Here, we obtain the respiration frequency using the autocorrelation function-based power spectral feature extraction, which finds the periodicity on the time axis.
Our input data have a small sample size, which is a challenge for deep learning techniques. In general, a small number of samples cannot guarantee good performance of ML algorithms such as LSTM, because LSTM has many nonlinear features and requires large datasets to train them. In practice, we aim to develop a CAGBA that performs well even with a small sample of PPG signals. We address the small-data problem by applying automatic autocorrelation function-based power spectral feature extraction [36]. The proposed CAGBA avoids overfitting on the training datasets: we extend the data dimension by applying automated feature extraction, and we then split the long resampled wave signal to increase the number of input data samples, which solves the overfitting problem caused by the small sample size. Furthermore, the proposed CAGBA is computationally inexpensive compared with the LSTM model. Our research provides an accurate RR estimate using the CAGBA process and offers a solution to reduce estimation uncertainty. This work is one of the first studies to use CAGBA for RR estimation with a limited sample. This study contributes to the RR estimation field as follows.
  • We propose a novel method for RR estimation using CAGBA from limited PPG signal data. The key to this method is to extend the dimension of the input data using the autocorrelation function-based power spectral feature extraction process;
  • We split the long-resampled wave signal to increase the number of input data samples, which solves the overfitting problem caused by the small data sample problem;
  • The proposed method uses an autocorrelation function-based power spectral feature extraction process from the PPG signals in the time domain, which automatically extracts relevant features from the power spectral, learns, and then estimates the RR.

2. Dataset and Feature Extraction

2.1. Collection of PPG Signals

Our methodology uses two public biometric datasets, as shown in Figure 1. We first compare ML algorithm performance using the RRSYNTH dataset (http://peterhcharlton.github.io/RRest/syntheticdataset.html (accessed on 3 May 2021) [37]) with simulated PPG and ECG signals. The dataset consists of 192 wave signals, each 210 s in length, with a sampling frequency (Fs) of 500 Hz. There are three types of modulation: frequency modulation (FM), baseline wander (BW), and amplitude modulation (AM) [37]. Only AM data were used in this study because heart failure is associated with the pulse amplitude of the PPG signal. The AM of the PPG signal reflects the reduction in stroke volume during inspiration due to changes in intrathoracic pressure, resulting in reduced pulse amplitude [37]. A decrease in stroke volume is known to be closely associated with heart failure [38]. Therefore, RR was predicted from PPG signals using the AM-type signal modulation method. Upon closer inspection of the 192 AM records used to develop the proposed GBA, 64 AM records were excluded because of their untampered AM properties.
Second, we use the BIDMC dataset (http://peterhcharlton.github.io/RRest/bidmcdataset.html (accessed on 10 June 2022) [37]), which is extracted from the MIMIC-II resource [39]. The BIDMC dataset consists of ECG, PPG, and impedance pneumography (IP) respiratory signals acquired from intensive care patients. The dataset consists of 53 recordings of PPG, ECG, and IP signals (Fs = 125 Hz) obtained from adult patients aged 19–90 years for 8 min. Patients in the dataset were randomly selected among a cohort of patients admitted to Beth Israel Deaconess Medical Center (BIDMC) in Boston, USA. Reference RR values were derived using two sets of annotations of individual breaths of IP signals.

2.2. Short Review of Multiphase and Various Feature Extraction (MF)

We extracted features from wavelet transform domains and used autoregressive (AR) techniques [24] based on segmented PPG signals. Wavelet transform techniques were also used to compensate for the shortcomings of the Fourier transform, because they can analyze both time and frequency information. A notable aspect of this approach is the extraction of important features using several methods in parallel: in this work, we used a parallel combination of AR techniques [24], wavelet packet entropy [26], multifractal wavelet leaders [25], and the maximum overlap discrete wavelet transform [27,28]. We found that the patterns of the multivariate PPG signals could be distinguished in terms of their relationships, using the variability of each component in each PPG signal. Interested readers can refer to [22].

2.3. Preprocessing Steps for Feature Extraction

PPG techniques are commonly used in biomedical and related research fields for RR, HR, SpO2, and blood pressure prediction. As shown in Figure 2, two PPG waveforms were acquired from the RRSYNTH dataset [37]. The upper panel (a) shows an example of a low respiratory rate (10 bpm), and the lower panel (b) shows an example of a high rate (50 bpm). The dashed line on the PPG waveform represents the envelope between the PPG peaks and troughs. We can find fiducial points using this peak and trough information for the resampled wave signals. Next, we used steps 7 to 11 in Figure 1 to estimate the RR. Compared with the reference RRs (10 bpm and 50 bpm), the error of the estimated RR is within ±1.0 bpm in Figure 2a and within ±2.0 bpm in Figure 2b. First, high-frequency (HF) noise was reduced using a low-pass filter (Kaiser window function) with a 3 dB cutoff of 35 Hz, as shown in Figure 1. Next, the PPG signal was separated into pulses using the incremental merge segmentation technique [40], an adaptive pulse segmentation algorithm for converting the PPG waveform into pulses and automatically separating artifacts, as shown in the third box in Figure 1. Fiducial points were then identified from the PPG peaks and troughs, as shown in the fourth box on the left side of Figure 1. The irregular PPG signals were resampled to 5 Hz using linear interpolation [40], as shown in the fifth box on the left side of Figure 1. Subsequently, low-frequency (LF) components were eliminated using a high-pass filter (Kaiser window function) with a 3 dB cutoff frequency. Finally, resampled waveform signals were acquired from the PPG signals.
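The preprocessing chain above (low-pass filtering with a Kaiser window, fiducial-point detection, linear interpolation to 5 Hz, and removal of low-frequency drift) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the filter parameters, the `preprocess_ppg` helper, and the moving-mean drift removal standing in for the high-pass Kaiser filter are all assumptions.

```python
import numpy as np
from scipy.signal import firwin, filtfilt, kaiserord, find_peaks
from scipy.interpolate import interp1d

def preprocess_ppg(ppg, fs=500, fs_out=5.0):
    # 1) Low-pass FIR filter (Kaiser window) to suppress high-frequency noise;
    #    35 Hz cutoff as in the paper, ripple/width chosen for illustration.
    numtaps, beta = kaiserord(ripple=40.0, width=5.0 / (0.5 * fs))
    lp = firwin(numtaps, cutoff=35.0, window=("kaiser", beta), fs=fs)
    x = filtfilt(lp, [1.0], ppg)

    # 2) Fiducial points: pulse peaks (troughs mark the lower envelope;
    #    they are computed here only to illustrate the envelope idea).
    peaks, _ = find_peaks(x, distance=int(0.3 * fs))
    troughs, _ = find_peaks(-x, distance=int(0.3 * fs))

    # 3) Linearly interpolate the irregular peak series and resample to 5 Hz.
    t_peaks = peaks / fs
    t_new = np.arange(t_peaks[0], t_peaks[-1], 1.0 / fs_out)
    resampled = interp1d(t_peaks, x[peaks], kind="linear")(t_new)

    # 4) Remove very-low-frequency drift by subtracting a 5 s moving mean
    #    (a stand-in for the high-pass Kaiser filter in the paper).
    resampled = resampled - np.convolve(resampled, np.ones(25) / 25, mode="same")
    return resampled
```

The output is the "resampled wave signal" that the next section's feature extraction consumes.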
The RRSYNTH dataset of breathing signals comprises PPG and RR target data from 192 records (participants). Each signal is 210 s long, with a sampling frequency (Fs) of 500 Hz. RR estimates were determined using the mean interval between successive respirations within a Hamming window. Iterative tests were performed to predict the actual RR value as the window size was changed to 16, 32, and 64 s. The proposed technique performed better at a window size of 32 s than at 16 or 64 s [37,39]. The BIDMC dataset consists of ECG, PPG, and IP respiratory signals obtained from intensive care patients: 53 recordings (Fs = 125 Hz) from adult patients aged 19–90 years, each 8 min long. The BIDMC dataset was preprocessed in the same way as the RRSYNTH dataset.

2.4. Autocorrelation Function-Based Power Spectral Feature Extraction Process

Preprocessing produced resampled wave signals from the PPG dataset for feature extraction, as shown in the boxes on the left side of Figure 1. Subsequently, the autocorrelation for feature extraction was calculated, as shown in the eighth box in Figure 1. The autocorrelation can be used to determine the periodicity in the time domain. The autocorrelation function relies on the difference between the discrete times n and n + m. If m = 0 (lag 0), the autocorrelation takes its maximum value, representing the total energy of the input. Given the measurements z = {z_n}, n = 1, …, N, at a discrete-time lag m, we define the autocorrelation function in practice as
ρ_m(z) = Σ_{n=1}^{N−m} (z_n − μ_z)(z_{n+m} − μ_z) / Σ_{n=1}^{N} (z_n − μ_z)²
where z_n denotes a data point obtained from the segmented wave signal and μ_z = (1/N) Σ_{n=1}^{N} z_n denotes the mean. The power spectrum is the fast Fourier transform of the autocorrelation function and effectively represents information about the time series of a biological signal [41]. The power spectrum corresponds to the autocorrelation function on the time axis because it equals the squared magnitude of the amplitude spectrum. Hence, we obtained the power spectrum of all components within 0.1–2.5 Hz. The CAGBA model extracts relevant features from the power spectrum, learns automatically, and subsequently estimates RR. The low-frequency part (≤0.5 Hz) of each spectrum can be an efficient feature for estimating RR. Each record is 210 s long with a sampling frequency (Fs) of 500 Hz, so 105,000 sampled data points were prepared per subject, as shown in Figure 2a,b. Figure 2 shows 30 s excerpts of the 210 s PPG signals of the 4th and 24th subjects. After preprocessing, a clean wave signal was prepared, as shown in Figure 3a. Next, the clean waveform (6 bpm) was split using a 32 s window, as shown in Figure 3b; the windows did not overlap. Figure 3c shows the segmented waveform converted to autocorrelation coefficients, and the power spectrum was then obtained, as shown in Figure 3d. Finally, the features were acquired by concatenating the six power spectra of Figure 3d, as shown in Figure 3e; these signals were obtained from the 2nd subject. Specifically, we obtain the resampled wave signal from the 210 s PPG signal, reshape it into six 32 s windows, compute the autocorrelation of each window, and obtain 6 × 257 (=1542) power spectral data points using the FFT, as shown in Figure 3e.
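A minimal sketch of this feature extraction under the stated setup: a 5 Hz resampled input split into non-overlapping 32 s windows, each window's autocorrelation transformed by FFT into a power spectrum restricted to 0.1–2.5 Hz, and the per-window spectra concatenated into one feature vector. The function names and the FFT length of 512 are illustrative assumptions.

```python
import numpy as np

def autocorr(z):
    """Normalized autocorrelation rho_m as in the text (rho_0 = 1)."""
    z = z - z.mean()
    full = np.correlate(z, z, mode="full")[len(z) - 1:]  # lags 0..N-1
    return full / full[0]

def extract_features(wave, fs=5.0, win_s=32, nfft=512):
    n = int(win_s * fs)                # 160 samples per 32 s window
    n_win = len(wave) // n             # e.g. 6 windows from a 210 s record
    feats = []
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    keep = (freqs >= 0.1) & (freqs <= 2.5)            # respiration band
    for w in range(n_win):
        rho = autocorr(wave[w * n:(w + 1) * n])
        power = np.abs(np.fft.rfft(rho, nfft)) ** 2   # power spectrum of rho
        feats.append(power[keep])
    return np.concatenate(feats)       # one feature vector per record
```

For a purely periodic input, the peak of each window's spectrum lands at the breathing frequency, which is what makes the concatenated vector informative for RR regression.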

3. Gradient Boosting Algorithm (GBA)

The GBA is a numerical optimization model [23] that aims to discover additive models minimizing a cost function. The training dataset is D = {(x_i, y_i)}_{i=1}^{N}, with x ∈ ℝ^{N×D} and y ∈ ℝ^{N×1}, where N denotes the number of observations and D the number of feature dimensions. The GBA's goal is to obtain a prediction as
y_j = f(x_j) + ε,  ε ~ N(0, σ²I)
where ε denotes Gaussian noise with zero mean and unknown variance σ², and we define a regression model k of the mapping function f that minimizes the expected cost function as
L(k) = E_{(y,x)~P}[L(y, k(x))] = ∫ L(y, k(x)) dP(y, x),
where P(y, x) denotes the joint probability and the cost function is given as L(y, k(x)) = (y − k(x))². We can iteratively update the estimate of y from x under the cost function L using the basic learner h(x), improving the previous learner as follows:
k_i(x) = k_{i−1}(x) + γ_i h_i(x),  i = 1, …, M
Here, M denotes the number of ensemble members, γ is a weight for the basic learner, and we use a decision tree as the basic learner. We minimize the cost function L iteratively using gradient descent. The GBA model is shown as Algorithm 1. The regression tree is one of the most common machine learning techniques and has the same structure as a decision tree: each internal node represents a functional test, each branch represents one of the possible test results, and each leaf node represents a regression value. Estimation errors are generally calculated as the difference between observations and estimates.
Algorithm 1: The GBA model for regression
  • Procedure: Training dataset (D); estimated function k(x)
  • Initialize k_0(x)
  • for j = 1, …, M do
  •     g_i^(j) = −∂L(y_i, k_{j−1}(x_i)) / ∂k_{j−1}(x_i)
  •    Compute the residual g_i^(j) as the partial derivative of the cost function L at all data points in the training dataset D = {(x_i, y_i)}_{i=1}^{N}
  •    Generate a new regression tree h_j(x_i) based on {x_i, g_i^(j)}
  •     γ_j = argmin_γ Σ_{i=1}^{N} L(y_i, k_{j−1}(x_i) + γ h_j(x_i))
  •    Discover an optimal increment step γ_j
  •     k_j(x) = k_{j−1}(x) + γ_j h_j(x)
  •    Update the regression model for estimation
  • end for
  • Return k_M(x) = Σ_{j=1}^{M} γ_j h_j(x) = k_{M−1}(x) + γ_M h_M(x)
  • End procedure
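Algorithm 1 can be rendered compactly for the squared loss, where the negative gradient is simply the residual y − k(x) and the step size γ_j has a closed-form line search. This is an illustrative sketch using scikit-learn trees as basic learners, not the authors' implementation; the function names and the depth-2 trees are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gba_fit(X, y, M=100, max_depth=2):
    """Gradient boosting for squared loss, following Algorithm 1."""
    k = np.full_like(y, y.mean(), dtype=float)   # k_0(x): constant initializer
    trees, gammas = [], []
    for _ in range(M):
        g = y - k                                # residual = -dL/dk for squared loss
        h = DecisionTreeRegressor(max_depth=max_depth).fit(X, g)
        pred = h.predict(X)
        # Closed-form line search for squared loss: gamma = <g, h> / <h, h>
        gamma = float(g @ pred) / float(pred @ pred + 1e-12)
        k = k + gamma * pred                     # k_j = k_{j-1} + gamma_j h_j
        trees.append(h)
        gammas.append(gamma)
    return y.mean(), trees, gammas

def gba_predict(model, X):
    base, trees, gammas = model
    out = np.full(X.shape[0], base)
    for h, g in zip(trees, gammas):
        out += g * h.predict(X)
    return out
```

Each round fits a tree to the current residual and adds it with its line-searched weight, which is exactly the k_j update in the algorithm.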

4. Results

In this experiment, a respiratory signal was first obtained from the PPG signal using the two datasets. The data were randomly split into 80% training and 20% test data. In addition, to evaluate the performance of the proposed algorithm, the reference RR was calculated from the oral-nasal pressure signal using a custom respiration detection algorithm [37].
The main parameters of the proposed and conventional models on the RRSYNTH dataset are summarized in Table 1. On the BIDMC dataset, the main parameters are the same except for the dimensions of the features. The parameters in this table were tuned for each process to achieve the best performance. Total training and testing times over 20 runs were measured using MATLAB® R2022 [42]. As shown in Table 2, the GBA models had a shorter execution time than the LSTM models and a longer total execution time than the SVR models. To compare the proposed method with the conventional methods, the differences between the RR estimates and the reference RR values were obtained experimentally and are summarized in Table 3 in terms of the mean absolute error (MAE) and the standard deviation (SD) of the MAE. The MAE and SD results are mean values over the 20 experiments; lower values indicate better performance. For an objective evaluation, the proposed autocorrelation-based power spectral feature extraction was also applied to the SVR and LSTM methods, in addition to the GBA model. As mentioned in the introduction, the proposed approach is called the CAGBA model. The results obtained using SVR with multiphase feature extraction (SVRMF) and SVR with the autocorrelation function-based power spectral feature (CASVR) are listed in Table 3. The MAE of CASVR (2.89 bpm) is better than that of SVRMF (5.57 bpm) on the RRSYNTH dataset. On BIDMC, the MAE results of the two models, SVRMF and CASVR, are very similar. The results obtained using LSTM with multiphase feature extraction (LSTMMF) and LSTM with the autocorrelation function-based power spectral feature (CALSTM) [18] are also shown in Table 3. CALSTM (5.35 bpm) performs slightly better than LSTMMF (5.63 bpm) on the RRSYNTH dataset.
In addition, the MAE results of the LSTMMF and CALSTM models on the BIDMC data are 2.37 bpm and 2.54 bpm, respectively. The results obtained using the GBA with multiphase feature extraction (GBAMF) and the CAGBA model are given in Table 3. The MAE of CAGBA (1.06 bpm) compares favorably with that of GBAMF (5.25 bpm). On the BIDMC dataset, the MAE results of the GBAMF and CAGBA models are 1.98 bpm and 1.94 bpm, respectively.
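The evaluation protocol described above (a random 80/20 split repeated 20 times, reporting the MAE and its SD) might be sketched as follows. The synthetic features and targets, and the use of scikit-learn's off-the-shelf `GradientBoostingRegressor` as a stand-in for the CAGBA model, are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))                         # stand-in feature vectors
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)    # stand-in RR targets

maes = []
for seed in range(20):                                 # 20 repeated experiments
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=seed)
    model = GradientBoostingRegressor().fit(Xtr, ytr)
    maes.append(mean_absolute_error(yte, model.predict(Xte)))

print(f"MAE = {np.mean(maes):.2f} +/- {np.std(maes):.2f}")
```

Reporting both the mean MAE and its SD across splits, as in Table 3, captures accuracy and stability at once.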

ANOVA Test

ANOVA tests [43] are used to objectively compare and effectively assess the performance of the proposed CAGBA against the GBAMF, LSTMMF, CALSTM, SVRMF, and CASVR techniques. ANOVA is a statistical method for comparing variance in the arithmetic means of different groups. The hypotheses of interest for ANOVA are as follows [22]:
H_0: Θ_1 = Θ_2 = … = Θ_j;  H_1: not all Θ_j are equal
The null hypothesis of ANOVA states that there is no difference in the means between the groups, and the alternative hypothesis states that the means are not all equal. Therefore, in this study, multiple comparisons between groups are used to determine the effects of the different groups and their means. One-way ANOVA is a very compact linear model, given by e_ij = α_j + ε_ij. Here, e_ij represents the experimental MAE results of CAGBA, GBAMF, LSTMMF, CALSTM, SVRMF, and CASVR, where i = 1, …, 20 indexes the test runs and j = 1, …, 6 indexes the models. We first compare the MAE results of SVRMF and CASVR on the RRSYNTH dataset, as shown in Table 4. The total degree of freedom (df) is the total number of measurements minus one: 40 − 1 = 39. The degree of freedom between groups is 2 − 1 = 1. In Table 4, MS denotes the mean squared error (sum of squares (SS)/df = 71.66), and the F-statistic is the ratio of mean squared errors (71.66/0.61 = 118.02). The p-value, 3.25 × 10⁻¹³, is the probability of obtaining a value greater than the computed test statistic, that is, P(F > 118.02). We also compare the MAE results of the LSTMMF and CALSTM models in Table 4; here, the p-value (0.319) is greater than the significance level of 0.05. In Table 5, the MAE results of the GBAMF and CAGBA models are presented; the p-value (2.43 × 10⁻²⁴) between the GBAMF and CAGBA models is very small compared with the significance level of 0.05. Table 5 also shows the df (=59) and p-value (0.316) obtained from the MAE results of the SVRMF, LSTMMF, and GBAMF models on the RRSYNTH dataset. We then see the p-value (1.42 × 10⁻²⁷) obtained from the MAE results of the CASVR, CALSTM, and CAGBA models on the RRSYNTH dataset, and the p-value (4.69 × 10⁻⁴⁵) obtained using all six models, as in Table 6. Table 7 displays the p-value (0.805) obtained from the MAE results of the SVRMF and CASVR models on the BIDMC dataset.
We also see the p-value (0.355) obtained from the MAE results of the LSTMMF and CALSTM models on the BIDMC dataset. Table 8 shows the p-value (0.814) obtained from the MAE results of the GBAMF and CAGBA models, and we note the p-value (0.036) obtained from the MAE results of the SVRMF, LSTMMF, and GBAMF models. Table 9 shows the p-value (0.0043) calculated from the MAE results of the CASVR, CALSTM, and CAGBA models, and the p-value (0.0022) obtained from the MAE results of all six models on the BIDMC dataset.
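As a worked illustration of the test used here, a one-way ANOVA on two groups of 20 per-run MAE values can be computed with `scipy.stats.f_oneway`. The synthetic group means loosely mirror the reported GBAMF and CAGBA levels, but the numbers themselves are generated for illustration only.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
mae_model_a = rng.normal(loc=5.25, scale=0.8, size=20)  # illustrative GBAMF-like runs
mae_model_b = rng.normal(loc=1.06, scale=0.3, size=20)  # illustrative CAGBA-like runs

# f_oneway returns the F statistic and the p-value for H0: equal group means.
F, p = f_oneway(mae_model_a, mae_model_b)
print(f"F = {F:.2f}, p = {p:.3g}")   # a tiny p-value rejects H0
```

With two groups, the between-groups df is 2 − 1 = 1 and the total df is 40 − 1 = 39, matching the bookkeeping in Table 4.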
Figure 4a displays the MAE results of the SVRMF and CASVR models; Figure 4b shows those of the LSTMMF and CALSTM models; and Figure 4c shows those of the GBAMF and CAGBA models. Figure 4d presents the MAE results of the SVRMF, LSTMMF, and GBAMF models, Figure 4e those of the CASVR, CALSTM, and CAGBA models, and Figure 4f those of all six models on the RRSYNTH dataset. Figure 5 shows the MAE results in the same order as Figure 4; the results in Figure 4 use the RRSYNTH dataset, and those in Figure 5 use the BIDMC dataset.

5. Discussion

This study is the first to integrate autocorrelation function-based power spectral feature extraction with a GBA model to estimate respiration rate from PPG signals. The results show that the MAE of the proposed model is smaller than that of all other models on the RRSYNTH dataset. The overfitting problem caused by small data is solved by dividing the long resampled wave signal to compensate for the insufficient number of samples. In addition, the autocorrelation function-based power spectral feature extraction technique works well with the GBA model on the RRSYNTH dataset. Although the proposed integrated model is not simple to construct, because it consists of an ensemble algorithm and requires substantial computing resources, it uses these resources more effectively than the LSTM model. We also observed that the MF model consumed more feature extraction time than the CA model, so the CA method uses computing resources more efficiently.
On the other hand, when using the BIDMC dataset, both the MF feature extraction method and the proposed CA feature extraction method were applied to the SVR, LSTM, and GBA models, and the p-values confirmed no significant difference between them. Although there is no difference in the feature extraction technique on this dataset, the proposed model is an excellent choice for respiration rate estimation due to its low MAE and stable SD on both datasets. The overall evaluation shows that the proposed algorithm is more accurate than the LSTM and SVR algorithms for respiration rate estimation.

Limitations

We experimented with two PPG datasets, RRSYNTH and BIDMC [37]. This study is limited by the small number of records from a relatively small number of subjects. This limitation is mitigated by using autocorrelation function-based power spectral feature extraction from physiological respiration signals on the time axis, which automatically extracts relevant features from the power spectrum. However, we cannot claim that all our experiments generalize beyond the settings described above, and the algorithm is not detailed enough to replicate exactly in some cases. Cross-validation on other public datasets is needed.

6. Conclusions

In this study, we proposed a new technique that uses autocorrelation function-based power spectral feature extraction with the GBA model to estimate respiration rate from photoplethysmography signals. The autocorrelation function-based power spectral feature extraction was used to overcome the challenge of insufficient photoplethysmography signals. We built an automatic feature extraction process to increase the data dimension and then split the long resampled wave signal to increase the number of input data samples, which solved the overfitting problem caused by small data samples. The main contribution of this study is the combination of autocorrelation function-based power spectral feature extraction with the GBA model, based on the automatic extraction of relevant features, to achieve higher stability and accuracy. The proposed methodology obtained lower MAEs and more stable SDs for respiration rate estimation than the LSTM and SVR methods, demonstrating excellent performance. Thus, this work provides a novel method for increasing the accuracy of respiration rate estimation and a solution for reducing estimation errors. Further experimental tests should be conducted on new patient populations in the future.

Author Contributions

Conceptualization, S.L. and G.L.; methodology, S.L.; software, S.L.; validation, S.L., G.L. and H.M.; formal analysis, C.-H.S.; investigation, C.-H.S.; resources, G.L.; data curation, H.M.; writing—original draft preparation, S.L.; writing—review and editing, H.M.; visualization, G.L.; supervision, C.-H.S.; project administration, G.L.; funding acquisition, C.-H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1A2C1010405) and by the Research Grant of Kwangwoon University in 2020.

Data Availability Statement

Acknowledgments

The present research has been conducted by the Research Grant of Kwangwoon University in 2020.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
RR      Respiratory rate
PPG     Photoplethysmography
ECG     Electrocardiogram
HR      Heart rate
SpO2    Peripheral oxygen saturation
LSTM    Long short-term memory
ML      Machine learning
GBA     Gradient boosting algorithm
SVR     Support vector regression
CAGBA   Autocorrelation function-based power spectral feature extraction process with the GBA
FS      Sampling frequency
FM      Frequency modulation
BW      Baseline wander
AM      Amplitude modulation
HF      High-frequency
LF      Low-frequency
IP      Impedance pneumography
MAE     Mean absolute error
SD      Standard deviation
SVRMF   SVR with the multiphase feature extraction
CASVR   SVR with autocorrelation function-based power spectral feature
LSTMMF  LSTMs with multiphase feature extraction
CALSTM  LSTMs with autocorrelation function-based power spectral feature
GBAMF   GBA with multiphase feature extraction
MF      Multiphase feature extraction
CA      Autocorrelation function-based power spectral feature extraction
df      Degrees of freedom
MS      Mean squared error
SS      Sum of squares
BG      Between groups
WG      Within groups
BIDMC   Beth Israel Deaconess Medical Center

Figure 1. Block diagram of the proposed CAGBA methodology.
Figure 2. Upper panel (a) shows an example PPG signal (10 breaths per minute) from the 4th record, and lower panel (b) shows an example PPG signal (50 breaths per minute) from the 24th record, where the 4th record denotes the 4th of 192 records.
Figure 3. Top panel (a) is a resampled wave signal from the 2nd subject's PPG signal, panel (b) is a segmented wave signal from the resampled wave signal in (a), panel (c) is an autocorrelation signal computed from the segmented wave signal in (b), panel (d) is the power spectrum of the autocorrelation signal in (c), and bottom panel (e) is the power spectral feature (PSF) extracted from the power spectrum in (d).
Figure 4. Box plots of the MAE and SD relative to the reference RR method on the RRSYNTH dataset: (a) MAE results of SVRMF and CASVR; (b) MAE results of LSTMMF and CALSTM; (c) MAE results of GBAMF and CAGBA; (d) MAE results of SVRMF, LSTMMF, and GBAMF; (e) MAE results of CASVR, CALSTM, and CAGBA; (f) MAE results of all models.
Figure 5. Box plots of the MAE and SD relative to the reference RR method on the BIDMC dataset: (a) MAE results of SVRMF and CASVR; (b) MAE results of LSTMMF and CALSTM; (c) MAE results of GBAMF and CAGBA; (d) MAE results of SVRMF, LSTMMF, and GBAMF; (e) MAE results of CASVR, CALSTM, and CAGBA; (f) MAE results of all models.
Table 1. The core parameters used in the proposed and conventional methods, summarized for the RRSYNTH dataset.

Parameters             | SVRMF | CASVR | LSTMMF     | CALSTM     | GBAMF    | CAGBA
Dimension of Feature   | 279   | 1542  | 279        | 1542       | 279      | 1542
Dimension of Output    | 1     | 1     | 1          | 1          | 1        | 1
Epsilon                | 3     | 3     | 1.0 × 10⁻⁸ | 1.0 × 10⁻⁸ | -        | -
KernelFunction         | Gau.  | Gau.  | -          | -          | Con.     | Con.
KernelScale            | auto  | auto  | -          | -          | -        | -
Number of Hidden Units | -     | -     | 200–300    | 200–300    | -        | -
FullyConnectedLayer    | -     | -     | 50         | 50         | -        | -
Dropout                | -     | -     | 0.2–0.5    | 0.2–0.6    | -        | -
MaxEpoch               | -     | -     | 300        | 300        | -        | -
Solver                 | -     | -     | adam       | adam       | -        | -
GradientThreshold      | -     | -     | 1          | 1          | -        | -
ShrinkageFactor        | -     | -     | -          | -          | 0.05–0.3 | 0.05–0.3
SubsamplingFactor      | -     | -     | -          | -          | 0.1–0.3  | 0.1–0.3
MaxTreeDepth           | -     | -     | -          | -          | 4–6      | 4–6
Max Iterations         | -     | -     | -          | -          | 2000     | 2000
Table 2. Total training and testing times over twenty runs, compared between the conventional and proposed methodologies; the computer system is an Intel® Core(TM) i5-9400 CPU at 4.1 GHz with 16.0 GB RAM, a 64-bit OS, and MATLAB® 2022 (The MathWorks Inc., Natick, MA, USA).

Dataset | Unit | SVRMF | CASVR | LSTMMF | CALSTM | GBAMF | CAGBA
RRSYNTH | (s)  | 60.47 | 10.42 | 94.15  | 92.75  | 75.28 | 23.60
BIDMC   | (s)  | 51.56 | 8.76  | 92.81  | 89.35  | 62.33 | 17.32
Table 3. RR estimation results for the SVR, LSTM, and GBA models, expressed as the MAE and SD of the difference from the reference RR method, where MF denotes the multiphase feature extraction and CA the autocorrelation function-based power spectral feature extraction.

Dataset | Errors | SVRMF | CASVR | LSTMMF | CALSTM | GBAMF | CAGBA
RRSYNTH | MAE    | 5.57  | 2.89  | 5.63   | 5.35   | 5.25  | 1.06
RRSYNTH | SD     | 0.06  | 0.15  | 0.44   | 0.16   | 0.42  | 0.41
BIDMC   | MAE    | 2.01  | 2.05  | 2.37   | 2.54   | 1.98  | 1.94
BIDMC   | SD     | 0.45  | 0.58  | 0.60   | 0.56   | 0.48  | 0.61
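The MAE and SD values reported above follow the usual definitions on the per-trial absolute errors against the reference RR. A small sketch, using hypothetical numbers rather than the paper's data:

```python
import numpy as np

def mae_sd(estimates, reference):
    """Mean absolute error against a reference RR, and the standard
    deviation of the absolute errors (sample SD, ddof=1)."""
    errors = np.abs(np.asarray(estimates, float) - np.asarray(reference, float))
    return errors.mean(), errors.std(ddof=1)

# Hypothetical RR estimates and reference values in breaths/min
est = [15.2, 14.1, 16.3, 15.8]
ref = [15.0, 14.5, 15.9, 15.5]
mae, sd = mae_sd(est, ref)
print(f"MAE={mae:.2f} breaths/min, SD={sd:.2f}")
```

Whether SD is taken over absolute errors per trial or per subject is a convention choice; the sketch assumes the former.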
Table 4. Using the RRSYNTH dataset, the ANOVA results of the left-side columns were obtained from the SVRMF and CASVR models; the right-side columns were obtained from the LSTMMF and CALSTM models, where SS denotes the sum of squares, df the degrees of freedom, MS the mean squared error, BG between groups, and WG within groups.

Source | SS    | df | MS    | F      | p value      || SS    | df | MS   | F    | p value
BG     | 71.66 | 1  | 71.66 | 118.02 | 3.25 × 10⁻¹³ || 0.77  | 1  | 0.77 | 1.02 | 0.319
WG     | 23.07 | 38 | 0.61  | -      | -            || 28.75 | 38 | 0.76 | -    | -
Total  | 94.73 | 39 | -     | -      | -            || 29.52 | 39 | -    | -    | -
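The SS, df, MS, and F quantities in the ANOVA tables come from the standard one-way decomposition of variance into between-group and within-group parts. A minimal numpy sketch with hypothetical error samples (the function name and group values are illustrative, not the paper's data):

```python
import numpy as np

def one_way_anova(*groups):
    """One-way ANOVA decomposition.
    Returns (SS_between, df_between, SS_within, df_within, F)."""
    all_vals = np.concatenate([np.asarray(g, float) for g in groups])
    grand_mean = all_vals.mean()
    # Between-group sum of squares: group means vs. grand mean
    ss_bg = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: samples vs. their own group mean
    ss_wg = sum(((np.asarray(g, float) - np.mean(g)) ** 2).sum() for g in groups)
    df_bg = len(groups) - 1
    df_wg = len(all_vals) - len(groups)
    f_stat = (ss_bg / df_bg) / (ss_wg / df_wg)  # ratio of mean squares
    return ss_bg, df_bg, ss_wg, df_wg, f_stat

# Hypothetical MAE samples from two methods, 20 trials each (as in Table 4)
rng = np.random.default_rng(1)
g1 = rng.normal(5.6, 0.8, 20)
g2 = rng.normal(2.9, 0.8, 20)
ss_bg, df_bg, ss_wg, df_wg, f = one_way_anova(g1, g2)
print(df_bg, df_wg)  # prints 1 38, matching the df column of Table 4
```

A large F with a small p value, as in the left side of Table 4, indicates that the between-method difference dominates the within-method variation.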
Table 5. Using the RRSYNTH dataset, the ANOVA results of the left-side columns were obtained from the GBAMF and CAGBA models; the right-side columns were obtained from the SVRMF, LSTMMF, and GBAMF models.

Source | SS     | df | MS     | F      | p value      || SS    | df | MS   | F    | p value
BG     | 175.90 | 1  | 175.90 | 559.68 | 2.43 × 10⁻²⁴ || 1.64  | 2  | 0.82 | 1.17 | 0.316
WG     | 11.94  | 38 | 0.31   | -      | -            || 39.81 | 57 | 0.70 | -    | -
Total  | 187.84 | 39 | -      | -      | -            || 41.45 | 59 | -    | -    | -
Table 6. Using the RRSYNTH dataset, the ANOVA results of the left-side columns were obtained from the CASVR, CALSTM, and the proposed CAGBA models; the right-side columns were obtained from the SVRMF, CASVR, LSTMMF, CALSTM, GBAMF, and CAGBA models.

Source | SS     | df | MS    | F      | p value      || SS     | df  | MS    | F      | p value
BG     | 185.70 | 2  | 92.85 | 220.91 | 1.42 × 10⁻²⁷ || 357.68 | 5   | 71.54 | 127.89 | 4.69 × 10⁻⁴⁵
WG     | 23.96  | 57 | 0.42  | -      | -            || 63.77  | 114 | 0.559 | -      | -
Total  | 209.66 | 59 | -     | -      | -            || 421.45 | 119 | -     | -      | -
Table 7. Using the BIDMC dataset, the ANOVA results of the left-side columns were obtained from the SVRMF and CASVR models; the right-side columns were obtained from the LSTMMF and CALSTM models.

Source | SS    | df | MS   | F    | p value || SS    | df | MS   | F    | p value
BG     | 0.02  | 1  | 0.02 | 0.06 | 0.805   || 0.30  | 1  | 0.30 | 0.88 | 0.355
WG     | 10.24 | 38 | 0.27 | -    | -       || 12.87 | 38 | 0.34 | -    | -
Total  | 10.26 | 39 | -    | -    | -       || 13.17 | 39 | -    | -    | -
Table 8. Using the BIDMC dataset, the ANOVA results of the left-side columns were obtained from the GBAMF and CAGBA models; the right-side columns were obtained from the SVRMF, LSTMMF, and GBAMF models.

Source | SS    | df | MS   | F    | p value || SS    | df | MS   | F    | p value
BG     | 0.02  | 1  | 0.02 | 0.06 | 0.814   || 1.88  | 2  | 0.94 | 3.53 | 0.036
WG     | 11.61 | 38 | 0.31 | -    | -       || 15.20 | 57 | 0.27 | -    | -
Total  | 11.63 | 39 | -    | -    | -       || 17.08 | 59 | -    | -    | -
Table 9. Using the BIDMC dataset, the ANOVA results of the left-side columns were obtained from the CASVR, CALSTM, and CAGBA models; the right-side columns were obtained from the SVRMF, CASVR, LSTMMF, CALSTM, GBAMF, and CAGBA models.

Source | SS    | df | MS   | F    | p value || SS    | df  | MS    | F | p value
BG     | 4.12  | 2  | 2.06 | 6.01 | 0.0043  || 6.10  | 5   | 1.22  | 4 | 0.0022
WG     | 19.52 | 57 | 0.34 | -    | -       || 34.72 | 114 | 0.304 | - | -
Total  | 23.64 | 59 | -    | -    | -       || 40.82 | 119 | -     | - | -
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Lee, S.; Moon, H.; Son, C.-H.; Lee, G. Respiratory Rate Estimation Combining Autocorrelation Function-Based Power Spectral Feature Extraction with Gradient Boosting Algorithm. Appl. Sci. 2022, 12, 8355. https://doi.org/10.3390/app12168355
