Congestive Heart Failure Category Classification Using Neural Networks in Short-Term Series
Abstract
1. Introduction
- The importance of classifying cardiovascular time series lies in the information it provides for decision making. Typically, studies concentrate on analyzing long-term series (24 h of records) [43,44,45], giving less attention to short time series. In this research, we aim to broaden the study of neural networks for classifying congestive heart failure using short records of heart rate variability (RR intervals);
- We compare three different approaches for classifying congestive heart failure conditions: the MLP network, the RVFL network, and the ELM;
- We show that, for different congestive heart failure conditions, the output models produce misclassifications when classical variants of neural networks are used. However, coupling some deep feature maps with the RVFL and MLP models allows us to obtain very high classification accuracy.
2. Models Based on Neural Networks
2.1. Multilayer Perceptron Neural Network
2.2. Random Vector Functional Link Network
2.3. Convolutional Neural Network
3. Materials and Methods
3.1. Congestive Heart Failure
3.2. Selection and Preprocessing of RR Intervals
- Consider a finite time series of length N, where , …, N;
- Progressing from to , the s value is calculated as
- If satisfies the following condition:
- Finally, compute the new time series as
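The filtering steps above are only partially reproduced here. As an illustrative sketch of this style of RR-interval artifact rejection (the window size, the threshold `tau`, and the exact acceptance condition are hypothetical stand-ins, not the paper's values):

```python
import numpy as np

def filter_rr(rr, window=5, tau=0.2):
    """Illustrative RR-interval filter: drop any beat that deviates from
    its local moving average by more than a fraction tau. The window and
    threshold are hypothetical; the paper's exact rule is not reproduced."""
    rr = np.asarray(rr, dtype=float)
    kept = []
    for i in range(len(rr)):
        lo, hi = max(0, i - window), min(len(rr), i + window + 1)
        local_mean = rr[lo:hi].mean()
        if abs(rr[i] - local_mean) <= tau * local_mean:
            kept.append(rr[i])
    return np.array(kept)
```

Applied to a series of normal beats (around 0.8 s) containing one spurious 2.0 s interval, the sketch removes only the artifact and keeps the remaining beats.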
3.3. Environment
3.4. Performance Metrics
3.5. Fuzzy Activation Function
4. Results and Discussion
4.1. Selection of Learning Models Based on Neural Networks
- ELM and RVFL networks: These two non-iterative architectures learned nonlinear features through the sigmoid function, chosen for its efficiency in this class of algorithms [56,66]. The random weights and biases of the hidden layer followed a uniform distribution [55]. In the training phase, grid searches were used to estimate the optimal values of C (the regularization constant) and L (the number of hidden neurons). Table 3 shows the optimal hyperparameter values after an exhaustive search. The same procedure was applied to each database designed in this study;
- MLP network: Table 3 also includes the best MLP model for each database, with hyperparameters chosen through an empirical (manual) setup. We adopted this fast training procedure because some experiments showed no significant difference between a fine-tuned and an empirically chosen fit. Over-fitting was controlled by adding a norm penalty to the cost function in Equation (1). The Adam optimizer was the training algorithm, and the ReLU function produced the non-linear feature learning;
- CNN approach: To achieve more stable results in the classification of cardiovascular disease, we focused on a CNN model for high-level feature extraction. An optimal feature representation can be obtained by adding convolutional layers to the CNN. Our research proposes a CNN model with seven convolutional layers, each followed by a pooling layer. The output of each convolutional layer was batch-normalized, and the method then learned non-linear features through the ReLU function. Finally, we concatenated a fully connected MLP with an additional convolutional layer for the final classification. In the fully connected layer, the loss function was the crossentropyex function for the three mutually exclusive classes of congestive heart failure. The CNN was trained with the SGDM (Stochastic Gradient Descent with Momentum) algorithm. Table 4 presents the optimal hyperparameters, while Table 5 shows the topology of the proposed CNN model. In this deep learning approach, both the topology and the hyperparameters were hand-picked to speed up the training stage. The term “padding same” in a convolutional layer indicates that the output dimension does not change with respect to the input after applying a filter or kernel. To explore the extraction of deep features, experimental tests were conducted by linking some feature maps with an RVFL architecture. Specifically, the inputs of the RVFL network were the outputs of the pooling 3, pooling 4, pooling 5, pooling 6, and pooling 7 layers. We named these models after the pooling layer used: CNN–RVFL3, CNN–RVFL4, CNN–RVFL5, CNN–RVFL6, and CNN–RVFL7. In addition, we ran several training stages with varying numbers of epochs. In this experimental breakdown, the fuzzy function replaces the sigmoid function for the non-linear feature mapping, with C and L set manually as in Table 4;
- Five-fold cross-validation scheme: To corroborate the performance of the models discussed above, some experiments were repeated using the five-fold cross-validation technique. This training and testing strategy does not rely on a fixed split of the dataset, an important condition for achieving unbiased and precise evaluation metrics. The short time series were distributed over five folds (each fold holding 20% of the database). Each fold was used once as the test set, while the remaining four folds were used for training. For each designed database, the overall results were the average of the five runs. In view of the poor learning of the non-iterative and MLP models in cardiovascular classification, the next section includes only results of the CNN approach. To allow comparison with the CNN learning system without cross-validation, these experiments inherited the setup in Table 4 and Table 5.
4.2. Accuracy Analysis for the Classification of Cardiovascular Diseases
- Performance of CNN under five-fold cross-validation: Table 9 and Table 10 detail the generalization ability of the CNN under the five-fold cross-validation criterion. As before, 64 and 32 epochs were used for the first- and second-scenario databases, respectively. The classification of cardiovascular disease under this training and testing rule was comparable to that of the scheme without cross-validation (see Table 7 and Table 8). The random initialization of the parameters (weights and biases) and the averaging over the five runs explain the bounded variation between the evaluation metrics. This confirms the performance of the proposed CNN method and the importance of deep feature extraction in the final model decision.
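The five-fold protocol described above can be sketched as follows. This is a generic illustration of the splitting and averaging logic, not the authors' exact pipeline; `train_fn` and `eval_fn` are placeholder callables.

```python
import numpy as np

def five_fold_indices(n, seed=0):
    """Split n samples into 5 shuffled folds (~20% each)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    return np.array_split(idx, 5)

def cross_validate(X, y, train_fn, eval_fn, seed=0):
    """Train on 4 folds, evaluate on the held-out fold, average the
    metric over the five runs, as in the protocol described above."""
    folds = five_fold_indices(len(X), seed)
    scores = []
    for k in range(5):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(5) if j != k])
        model = train_fn(X[train_idx], y[train_idx])
        scores.append(eval_fn(model, X[test_idx], y[test_idx]))
    return float(np.mean(scores))
```

Every sample appears in exactly one test fold, so the averaged score is not tied to any single fixed split of the database.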
4.3. Limitations and Recommendations for Further Research
- The diagnosis of complex cardiac diseases may require more features to achieve acceptable results. For both the CNN classifier and the other conventional methods employed, the processing time increases with the length of the time series;
- The performance of the CNN approach is affected by the number of features used in the short time series, which the literature treats as an additional hyperparameter tuned by trial and error. This may introduce a certain degree of arbitrariness into the final classification model;
- The samples, classified according to the length of the time series used in the experiments, come from a single institution. It would be desirable to validate the model on other open-access databases.
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
MLP | Multilayer Perceptron |
ELM | Extreme Learning Machine |
RVFL | Random Vector Functional Link |
CNN | Convolutional Neural Network |
ReLU | Rectified Linear Unit |
PPV | Positive Predictive Value |
TP | True Positive |
FP | False Positive |
TN | True Negative |
FN | False Negative |
SGDM | Stochastic Gradient Descent with Momentum |
CHF | Congestive Heart Failure |
NYHA | New York Heart Association |
HRV | Heart Rate Variability |
ECG | Electrocardiogram |
EMD | Empirical Mode Decomposition |
DWT | Discrete Wavelet Transform |
MFDFA | Multifractal Detrended Fluctuation Analysis |
DFA | Detrended Fluctuation Analysis |
WT | Wavelet Analysis |
References
- Glass, L.; Hunter, P. There is a theory of heart. Phys. D Nonlinear Phenom. 1990, 43, 1–16. [Google Scholar] [CrossRef]
- Malik, M. Heart rate variability: Standards of measurement, physiological interpretation, and clinical use: Task force of the European Society of Cardiology and the North American Society for Pacing and Electrophysiology. Ann. Noninvasive Electrocardiol. 1996, 1, 151–181. [Google Scholar] [CrossRef]
- Sung, C.W.; Shieh, J.S.; Chang, W.T.; Lee, Y.W.; Lyu, J.H.; Ong, H.N.; Chen, W.T.; Huang, C.H.; Chen, W.J.; Jaw, F.S. Machine learning analysis of heart rate variability for the detection of seizures in comatose cardiac arrest survivors. IEEE Access 2020, 8, 160515–160525. [Google Scholar] [CrossRef]
- Chiew, C.J.; Liu, N.; Tagami, T.; Wong, T.H.; Koh, Z.X.; Ong, M.E. Heart rate variability based machine learning models for risk prediction of suspected sepsis patients in the emergency department. Medicine 2019, 98, e14197. [Google Scholar] [CrossRef]
- Agliari, E.; Barra, A.; Barra, O.A.; Fachechi, A.; Franceschi Vento, L.; Moretti, L. Detecting cardiac pathologies via machine learning on heart-rate variability time series and related markers. Sci. Rep. 2020, 10, 8845. [Google Scholar] [CrossRef]
- Miglis, M. Chapter 12 - Sleep and the Autonomic Nervous System. In Sleep and Neurologic Disease; Miglis, M.G., Ed.; Academic Press: San Diego, CA, USA, 2017; pp. 227–244. [Google Scholar] [CrossRef]
- Chapter 27-Ambulatory Electrocardiography. In Chou’s Electrocardiography in Clinical Practice, 6th ed.; Surawicz, B.; Knilans, T.K. (Eds.) W.B. Saunders: Philadelphia, PA, USA, 2008; pp. 631–645. [Google Scholar] [CrossRef]
- Bak, P.; Tang, C.; Wiesenfeld, K. Self-Organized Criticality. Phys. Rev. A 1988, 38, 364–374. [Google Scholar] [CrossRef]
- Lin, D. Robustness and perturbation in the modeled cascade heart rate variability. Phys. Rev. E. Stat. Nonlinear Soft Matter Phys. 2003, 67, 031914. [Google Scholar] [CrossRef]
- Kiyono, K.; Struzik, Z.; Aoyagi, N.; Sakata, S.; Hayano, J.; Yamamoto, Y. Critical Scale Invariance in a Healthy Human Heart Rate. Phys. Rev. Lett. 2004, 93, 178103. [Google Scholar] [CrossRef]
- Kotani, K.; Struzik, Z.; Takamasu, K.; Stanley, H.; Yamamoto, Y. Model for complex heart rate dynamics in health and diseases. Phys. Rev. E Stat. Nonlinear Soft Matter Phys. 2005, 72, 041904. [Google Scholar] [CrossRef]
- Makowiec, D.; Galaska, R.; Dudkowska, A.; Rynkiewicz, A.; Zwierz, M. Long-range dependencies in heart rate signals—Revisited. Phys. A Stat. Mech. Its Appl. 2006, 369, 632–644. [Google Scholar] [CrossRef]
- Makowiec, D.; Dudkowska, A.; Galaska, R.; Rynkiewicz, A. Multifractal estimates of monofractality in RR-heart series in power spectrum ranges. Phys. A Stat. Mech. Its Appl. 2009, 388, 3486–3502. [Google Scholar] [CrossRef]
- Makowiec, D.; Rynkiewicz, A.; Galaska, R.; Wdowczyk, J.; Zarczynska-Buchowiecka, M. Reading multifractal spectra: Aging by multifractal analysis of heart rate. Epl (Europhys. Lett.) 2011, 94, 68005. [Google Scholar] [CrossRef]
- Hadase, M.; Azuma, A.; Zen, K.; Asada, S.; Kawasaki, T.; Kamitani, T.; Kawasaki, S.; Sugihara, H.; Matsubara, H. Very Low Frequency Power of Heart Rate Variability is a Powerful Predictor of Clinical Prognosis in Patients with Congestive Heart Failure. Circ. J. 2004, 68, 343–347. [Google Scholar] [CrossRef]
- Usui, H.; Nishida, Y. The very low-frequency band of heart rate variability represents the slow recovery component after a mental stress task. PLoS ONE 2017, 12, e0182611. [Google Scholar] [CrossRef]
- Serrador, J.M.; Finlayson, H.C.; Hughson, R.L. Physical activity is a major contributor to the ultra low frequency components of heart rate variability. Heart (Br. Card. Soc.) 1999, 82. [Google Scholar] [CrossRef]
- Rodríguez-Liñares, L.; Lado, M.; Vila, X.; Méndez, A.; Cuesta, P. gHRV: Heart Rate Variability analysis made easy. Comput. Methods Programs Biomed. 2014, 116, 26–38. [Google Scholar] [CrossRef]
- Flevari, K.; Vagiakis, E.; Zakynthinos, S. Heart rate variability is augmented in patients with positional obstructive sleep apnea, but only supine LF/HF index correlates with its severity. Sleep Breath. 2014, 19, 359–367. [Google Scholar] [CrossRef]
- Ebrahimi, F.; Setarehdan, S.K.; Ayala-Moyeda, J.; Nazeran, H. Automatic sleep staging using empirical mode decomposition, discrete wavelet transform, time-domain, and nonlinear dynamics features of heart rate variability signals. Comput. Methods Programs Biomed. 2013, 112, 47–57. [Google Scholar] [CrossRef]
- Nayak, S.K.; Jarzębski, M.; Gramza-Michałowska, A.; Pal, K. Automated Detection of Cannabis-Induced Alteration in Cardiac Autonomic Regulation of the Indian Paddy-Field Workers Using Empirical Mode Decomposition, Discrete Wavelet Transform and Wavelet Packet Decomposition Techniques with HRV Signals. Appl. Sci. 2022, 12, 10371. [Google Scholar] [CrossRef]
- Lee, K.H.; Byun, S. Age Prediction in Healthy Subjects Using RR Intervals and Heart Rate Variability: A Pilot Study Based on Deep Learning. Appl. Sci. 2023, 13, 2932. [Google Scholar] [CrossRef]
- Eltahir, M.M.; Hussain, L.; Malibari, A.A.; K. Nour, M.; Obayya, M.; Mohsen, H.; Yousif, A.; Ahmed Hamza, M. A Bayesian dynamic inference approach based on extracted gray level co-occurrence (GLCM) features for the dynamical analysis of congestive heart failure. Appl. Sci. 2022, 12, 6350. [Google Scholar] [CrossRef]
- Zhang, Y.; Wei, S.; Zhang, L.; Liu, C. Comparing the Performance of Random Forest, SVM and Their Variants for ECG Quality Assessment Combined with Nonlinear Features. J. Med. Biol. Eng. 2018, 39, 381–392. [Google Scholar] [CrossRef]
- Karpagachelvi, S.; Arthanari, M.; Sivakumar, M. Classification of electrocardiogram signals with support vector machines and extreme learning machine. Neural Comput. Appl. 2012, 21, 1331–1339. [Google Scholar] [CrossRef]
- Zhou, X.; Zhu, X.; Nakamura, K.; Noro, M. Electrocardiogram quality assessment with a generalized deep learning model assisted by conditional generative adversarial networks. Life 2021, 11, 1013. [Google Scholar] [CrossRef]
- Hannun, A.Y.; Rajpurkar, P.; Haghpanahi, M.; Tison, G.H.; Bourn, C.; Turakhia, M.P.; Ng, A.Y. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nat. Med. 2019, 25, 65–69. [Google Scholar] [CrossRef]
- Brisk, R.; Bond, R.; Banks, E.; Piadlo, A.; Finlay, D.; McLaughlin, J.; McEneaney, D. Deep learning to automatically interpret images of the electrocardiogram: Do we need the raw samples? J. Electrocardiol. 2019, 57, S65–S69. [Google Scholar] [CrossRef]
- Sinnecker, D. A deep neural network trained to interpret results from electrocardiograms: Better than physicians? Lancet Digit. Health 2020, 2, e332–e333. [Google Scholar] [CrossRef]
- Ihsanto, E.; Ramli, K.; Sudiana, D.; Gunawan, T.S. Fast and accurate algorithm for ECG authentication using residual depthwise separable convolutional neural networks. Appl. Sci. 2020, 10, 3304. [Google Scholar] [CrossRef]
- Naeem, S.; Ali, A.; Qadri, S.; Khan Mashwani, W.; Tairan, N.; Shah, H.; Fayaz, M.; Jamal, F.; Chesneau, C.; Anam, S. Machine-learning based hybrid-feature analysis for liver cancer classification using fused (MR and CT) images. Appl. Sci. 2020, 10, 3134. [Google Scholar] [CrossRef]
- Yan, H.; Jiang, Y.; Zheng, J.; Peng, C.; Li, Q. A multilayer perceptron-based medical decision support system for heart disease diagnosis. Expert Syst. Appl. 2006, 30, 272–281. [Google Scholar] [CrossRef]
- Gupta, P.; Seth, D. Early Detection of Heart Disease Using Multilayer Perceptron. In Micro-Electronics and Telecommunication Engineering: Proceedings of 6th ICMETE 2022; Springer Nature: Singapore, 2023; pp. 309–315. [Google Scholar]
- He, W.; Xie, Y.; Lu, H.; Wang, M.; Chen, H. Predicting coronary atherosclerotic heart disease: An extreme learning machine with improved salp swarm algorithm. Symmetry 2020, 12, 1651. [Google Scholar] [CrossRef]
- Saputra, D.C.E.; Sunat, K.; Ratnaningsih, T. A new artificial intelligence approach using extreme learning machine as the potentially effective model to predict and analyze the diagnosis of anemia. Healthcare 2023, 11, 697. [Google Scholar] [CrossRef] [PubMed]
- Flores, J.; Loaeza, R.; Rodriguez Rangel, H.; González-santoyo, F.; Romero, B.; Gómez, A. Financial Time Series Forecasting Using a Hybrid Neural Evolutive Approach. In Proceedings of the the XV SIGEF International Conference, Lugo, Spain, 3–8 October 2009. [Google Scholar]
- Alba, E.; Mendoza, M. Bayesian Forecasting Methods for Short Time Series. Foresight Int. J. Appl. Forecast. 2007, 8, 41–44. [Google Scholar]
- Ernst, J.; Nau, G.; Bar-Joseph, Z. Clustering Short Time Series Gene Expression Data. Bioinformatics 2005, 21 (Suppl. S1), i159–i168. [Google Scholar] [CrossRef]
- López, J.L.; Contreras, J.G. Performance of multifractal detrended fluctuation analysis on short time series. Phys. Rev. E 2013, 87, 022918. [Google Scholar] [CrossRef]
- López, J.; Hernández, S.; Urrutia, A.; López-Cortés, X.; Araya, H.; Morales-Salinas, L. Effect of missing data on short time series and their application in the characterization of surface temperature by detrended fluctuation analysis. Comput. Geosci. 2021, 153, 104794. [Google Scholar] [CrossRef]
- Kleiger, R.E.; Stein, P.K.; Bosner, M.S.; Rottman, J.N. Time domain measurements of heart rate variability. Cardiol. Clin. 1992, 10, 487–498. [Google Scholar] [CrossRef]
- The Look AHEAD Research Group. Long-term effects of a lifestyle intervention on weight and cardiovascular risk factors in individuals with type 2 diabetes mellitus: Four-year results of the Look AHEAD trial. Arch. Intern. Med. 2010, 170, 1566–1575. [Google Scholar]
- Wang, T.; Lu, C.; Sun, Y.; Yang, M.; Liu, C.; Ou, C. Automatic ECG classification using continuous wavelet transform and convolutional neural network. Entropy 2021, 23, 119. [Google Scholar] [CrossRef]
- Rahul, J.; Sora, M.; Sharma, L.D.; Bohat, V.K. An improved cardiac arrhythmia classification using an RR interval-based approach. Biocybern. Biomed. Eng. 2021, 41, 656–666. [Google Scholar] [CrossRef]
- Faust, O.; Kareem, M.; Ali, A.; Ciaccio, E.J.; Acharya, U.R. Automated arrhythmia detection based on RR intervals. Diagnostics 2021, 11, 1446. [Google Scholar] [CrossRef] [PubMed]
- Heidari, A.A.; Faris, H.; Mirjalili, S.; Aljarah, I.; Mafarja, M. Ant lion optimizer: Theory, literature review, and application in multi-layer perceptron neural networks. Nat.-Inspired Optim. Theor. Lit. Rev. Appl. 2020, 811, 23–46. [Google Scholar]
- Afzal, S.; Ziapour, B.M.; Shokri, A.; Shakibi, H.; Sobhani, B. Building energy consumption prediction using multilayer perceptron neural network-assisted models; comparison of different optimization algorithms. Energy 2023, 282, 128446. [Google Scholar] [CrossRef]
- Lima-Junior, F.R.; Carpinetti, L.C.R. Predicting supply chain performance based on SCOR® metrics and multilayer perceptron neural networks. Int. J. Prod. Econ. 2019, 212, 19–38. [Google Scholar] [CrossRef]
- Wijnhoven, R.G.; de With, P. Fast training of object detection using stochastic gradient descent. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 424–427. [Google Scholar]
- Übeyli, E.D. Combined neural network model employing wavelet coefficients for EEG signals classification. Digit. Signal Process. 2009, 19, 297–308. [Google Scholar] [CrossRef]
- Pao, Y.H.; Phillips, S.M.; Sobajic, D.J. Neural-net computing and the intelligent control of systems. Int. J. Control. 1992, 56, 263–289. [Google Scholar] [CrossRef]
- Zhang, L.; Suganthan, P.N. A comprehensive evaluation of random vector functional link networks. Inf. Sci. 2016, 367, 1094–1105. [Google Scholar] [CrossRef]
- Rao, C.R.; Mitra, S.K. Further contributions to the theory of generalized inverse of matrices and its applications. Sankhyā Indian J. Stat. Ser. A 1971, 33, 289–300. [Google Scholar]
- Malik, A.; Gao, R.; Ganaie, M.; Tanveer, M.; Suganthan, P.N. Random vector functional link network: Recent developments, applications, and future directions. Appl. Soft Comput. 2023, 143, 110377. [Google Scholar] [CrossRef]
- Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501. [Google Scholar] [CrossRef]
- Vásquez-Coronel, J.A.; Mora, M.; Vilches, K. A Review of multilayer extreme learning machine neural networks. Artif. Intell. Rev. 2023, 19, 1–52. [Google Scholar] [CrossRef]
- Bhatt, D.; Patel, C.; Talsania, H.; Patel, J.; Vaghela, R.; Pandya, S.; Modi, K.; Ghayvat, H. CNN variants for computer vision: History, architecture, application, challenges and future scope. Electronics 2021, 10, 2470. [Google Scholar] [CrossRef]
- Khanday, N.Y.; Sofi, S.A. Deep insight: Convolutional neural network and its applications for COVID-19 prognosis. Biomed. Signal Process. Control. 2021, 69, 102814. [Google Scholar] [CrossRef] [PubMed]
- Boureau, Y.L.; Ponce, J.; LeCun, Y. A theoretical analysis of feature pooling in visual recognition. In Proceedings of the International Conference on Machine Learning (ICML), Haifa, Israel, 21–24 June 2010; pp. 111–118. [Google Scholar]
- Wang, T.; Wu, D.J.; Coates, A.; Ng, A.Y. End-to-end text recognition with convolutional neural networks. In Proceedings of the International Conference on Pattern Recognition (ICPR), Tsukuba, Japan, 11–15 November 2012; pp. 3304–3308. [Google Scholar]
- Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. In Proceedings of the Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Proceedings, Part I 13. 2014; pp. 818–833. [Google Scholar]
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef]
- Cameron, M.H. Physical Rehabilitation; W.B. Saunders: Saint Louis, MI, USA, 2007. [Google Scholar] [CrossRef]
- Goldberger, A.L.; Amaral, L.A.N.; Glass, L.; Hausdorff, J.M.; Ivanov, P.C.; Mark, R.G.; Mietus, J.E.; Moody, G.B.; Peng, C.K.; Stanley, H.E. PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals. Circulation 2000, 101, 215–220. [Google Scholar] [CrossRef]
- Goel, T.; Nehra, V.; Vishwakarma, V.P. An adaptive non-symmetric fuzzy activation function-based extreme learning machines for face recognition. Arab. J. Sci. Eng. 2017, 42, 805–816. [Google Scholar] [CrossRef]
- Liu, X.; Xu, L. The universal consistency of extreme learning machine. Neurocomputing 2018, 311, 176–182. [Google Scholar] [CrossRef]
Class | Limitation | Description |
---|---|---|
NYHA I | Relative | Patients who have no limitation of physical activity. |
NYHA II | Relative | Patients with cardiac disease that results in slight limitation of physical activity, with symptoms such as fatigue, palpitations, dyspnea, or anginal pain. |
NYHA III | Absolute | Patients with cardiac disease who are comfortable at rest; however, less-than-ordinary activity causes fatigue, palpitations, dyspnea, or anginal pain. |
NYHA IV | Absolute | Patients with cardiac disease that results in the inability to carry out any physical activity. |
Database Name | Description | Number of Subjects Studied | Number of Short Time Series | Length of Short Time Series |
---|---|---|---|---|
Congestive Heart Failure RR Interval | Beat annotation files (about 24 h each) from 29 subjects with congestive heart failure (NYHA classes I, II, and III) | 29 | 3663, 10,494 | 512, 1024, 2048 |
Scenarios | Signal Length | Hyperparameter | ||
---|---|---|---|---|
MLP | ELM | RVFL | ||
First scenario: databases with 3663 samples | 512 | Cross-entropy loss, ReLU activation, 64 mini-batches, a learning rate of , -regularization of , Adam optimizer, 50 epochs, and three hidden layers (50, 100, and 40 neurons). | sigmoid activation, , . | sigmoid activation, , . |
1024 | MLP model within 512 with a learning rate of and three hidden layers (100, 100, and 40 neurons). | sigmoid activation, , . | sigmoid activation, , . | |
2048 | MLP model within 512 with 60 epochs, -regularizer of and three hidden layers (50, 20, and 100 neurons). | sigmoid activation, , . | sigmoid activation, , . | |
Second scenario: 10,494 short time series | 512 | MLP model within 512 with 75 epochs, -regularizer of and two hidden layers (50 and 20 neurons). | sigmoid activation, , . | sigmoid activation, , . |
1024 | MLP model within 512 with 75 epochs, -regularizer of and two hidden layers (50 and 20 neurons). | sigmoid activation, , . | sigmoid activation, , . | |
2048 | MLP model within 512 with 37 epochs, -regularizer of , a learning rate of and two hidden layers (70 and 40 neurons). | sigmoid activation, , . | sigmoid activation, , . |
Scenarios | Signal Length | Hyperparameter | |
---|---|---|---|
CNN–MLP | CNN–RVFL | ||
First scenario and second scenario | 512, 1024, and 2048 | crossentropyex loss, ReLU activation, 64 mini-batches, a learning rate of 0.022, momentum of 0.099, -regularizer of 0.03, SGDM optimizer. | Hyperparameters of the feature map and RVFL (fuzzy activation, , and ). |
Layer Type | Filter Size | Stride | Padding | Activation |
---|---|---|---|---|
Convolutional | 1 | same | ReLU | |
Max Pooling | 2 | — | — | |
Convolutional | 1 | same | ReLU | |
Max Pooling | 2 | — | — | |
Convolutional | 1 | same | ReLU | |
Max Pooling | 2 | — | — | |
Convolutional | 1 | same | ReLU | |
Max Pooling | 2 | — | — | |
Convolutional | 1 | same | ReLU | |
Max Pooling | 2 | — | — | |
Convolutional | 1 | same | ReLU | |
Max Pooling | 2 | — | — | |
Convolutional | 1 | same | ReLU | |
Max Pooling | 2 | — | — | |
Convolutional | 1 | same | ReLU | |
Fully connected | 3 | — | — | Softmax |
Scenarios | Signal Length | Overall Accuracy (%) | ||
---|---|---|---|---|
MLP | ELM | RVFL | ||
First scenario: databases with 3663 samples | 512 | 51.64 | 53.55 | 54.23 |
1024 | 50.27 | 54.51 | 54.10 | |
2048 | 49.18 | 54.64 | 53.83 | |
Second scenario: databases with 10,494 samples | 512 | 50.52 | 51.91 | 50.29 |
1024 | 50.05 | 50.57 | 50.00 | |
2048 | 50.09 | 50.62 | 49.71 |
Scenarios | Signal Length | Epochs | Overall Accuracy (%) | |||||
---|---|---|---|---|---|---|---|---|
CNN–MLP | CNN–RVFL3 | CNN–RVFL4 | CNN–RVFL5 | CNN–RVFL6 | CNN–RVFL7 | |||
First scenario: databases with 3663 samples | 512 | 8 | ||||||
16 | ||||||||
32 | ||||||||
64 | ||||||||
1024 | 8 | |||||||
16 | ||||||||
32 | ||||||||
64 | ||||||||
2048 | 8 | |||||||
16 | ||||||||
32 | ||||||||
64 | ||||||||
Second scenario: databases with 10,494 samples | 512 | 8 | ||||||
16 | ||||||||
32 | ||||||||
1024 | 8 | |||||||
16 | ||||||||
32 | ||||||||
2048 | 8 | |||||||
16 | ||||||||
32 |
Scenarios | Signal Length | Class I (%) | Class II (%) | Class III (%) | |||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Acc | PPV | Sen | Spe | Acc | PPV | Sen | Spe | Acc | PPV | Sen | Spe | ||
First scenario: databases with 3663 samples | 512 | 82.92 | 65.23 | 82.27 | 83.18 | 86.47 | 89.41 | 74.04 | 94.41 | 94.26 | 92.08 | 90.57 | 96.11 |
1024 | 98.36 | 97.66 | 97.66 | 98.74 | 99.73 | 100 | 99.16 | 100 | 98.63 | 97.50 | 98.31 | 98.78 | |
2048 | 99.04 | 97.66 | 99.60 | 98.75 | 99.73 | 99.58 | 99.58 | 99.80 | 99.32 | 100 | 97.96 | 100 | |
Second scenario: databases with 10,494 samples | 512 | 97.99 | 95.37 | 98.69 | 97.66 | 99.76 | 99.72 | 99.58 | 99.86 | 98.05 | 98.65 | 95.35 | 99.36 |
1024 | 97.47 | 98.04 | 94.72 | 98.97 | 98.90 | 96.81 | 100 | 98.36 | 98.38 | 97.44 | 97.74 | 98.82 | |
2048 | 98.23 | 96.35 | 98.42 | 98.14 | 99.48 | 99.31 | 99.31 | 99.64 | 98.52 | 98.80 | 96.62 | 99.44 |
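The class-wise Acc, PPV, Sen, and Spe reported in these tables can be derived from a one-vs-rest confusion matrix over the three NYHA classes; a minimal sketch (a generic computation, not the authors' evaluation code):

```python
import numpy as np

def class_metrics(y_true, y_pred, cls):
    """One-vs-rest accuracy, positive predictive value, sensitivity,
    and specificity for a single class, from TP/FP/FN/TN counts."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == cls) & (y_true == cls))
    fp = np.sum((y_pred == cls) & (y_true != cls))
    fn = np.sum((y_pred != cls) & (y_true == cls))
    tn = np.sum((y_pred != cls) & (y_true != cls))
    acc = (tp + tn) / (tp + tn + fp + fn)
    ppv = tp / (tp + fp) if tp + fp else 0.0
    sen = tp / (tp + fn) if tp + fn else 0.0
    spe = tn / (tn + fp) if tn + fp else 0.0
    return acc, ppv, sen, spe
```

Each of the three classes is scored by treating it as the positive class and pooling the other two as negatives.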
Scenarios | Signal Length | Overall Accuracy (%) | |||||
---|---|---|---|---|---|---|---|
CNN–MLP | CNN–RVFL3 | CNN–RVFL4 | CNN–RVFL5 | CNN–RVFL6 | CNN–RVFL7 | ||
First scenario: databases with 3663 samples | 512 | 81.96 | 63.09 | 69.53 | 78.52 | 82.56 | 83.01 |
1024 | 97.51 | 64.01 | 72.87 | 86.21 | 96.10 | 97.58 | |
2048 | 96.00 | 65.66 | 74.01 | 85.68 | 95.10 | 96.58 | |
Second scenario: databases with 10,494 samples | 512 | 96.38 | 65.44 | 76.45 | 86.09 | 96.78 | 97.00 |
1024 | 97.10 | 71.73 | 83.27 | 90.00 | 96.72 | 97.60 | |
2048 | 97.90 | 72.87 | 84.04 | 89.69 | 98.31 | 98.40 |
Scenarios | Signal Length | Class I (%) | Class II (%) | Class III (%) | |||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Acc | PPV | Sen | Spe | Acc | PPV | Sen | Spe | Acc | PPV | Sen | Spe | ||
First scenario: databases with 3663 samples | 512 | 84.22 | 79.49 | 71.86 | 90.51 | 87.63 | 78.82 | 86.10 | 88.40 | 92.06 | 88.22 | 88.40 | 94.12 |
1024 | 97.78 | 96.65 | 96.91 | 98.23 | 99.11 | 99.13 | 98.13 | 99.59 | 98.12 | 96.79 | 97.51 | 98.43 | |
2048 | 98.16 | 96.71 | 98.09 | 98.23 | 99.86 | 98.75 | 99.59 | 99.21 | 98.29 | 97.85 | 96.82 | 99.00 | |
Second scenario: databases with 10,494 samples | 512 | 96.74 | 96.43 | 93.74 | 98.20 | 98.70 | 97.90 | 98.19 | 98.96 | 97.32 | 95.06 | 97.15 | 97.40 |
1024 | 97.32 | 96.61 | 95.31 | 98.33 | 98.89 | 97.91 | 98.80 | 98.85 | 97.99 | 96.94 | 97.21 | 98.38 | |
2048 | 97.00 | 96.00 | 98.07 | 97.94 | 99.56 | 99.69 | 98.99 | 99.84 | 98.26 | 98.12 | 96.64 | 99.07 |
López, J.L.; Vásquez-Coronel, J.A. Congestive Heart Failure Category Classification Using Neural Networks in Short-Term Series. Appl. Sci. 2023, 13, 13211. https://doi.org/10.3390/app132413211