Classification of Heart Sounds Using Chaogram Transform and Deep Convolutional Neural Network Transfer Learning
Abstract
1. Introduction
2. Proposed Approach
2.1. Preprocessing
2.2. Transformation of One-Dimensional PCG Signal into a Chaogram Image
2.3. DCNN Model
3. Experimental Setup and Results
3.1. Experimental Setup
3.2. Dataset
3.3. Evaluation Metrics
3.4. Experimental Results
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Zhang, S.; Zhang, R.; Chang, S.; Liu, C.; Sha, X. A low-noise-level heart sound system based on novel thorax-integration head design and wavelet denoising algorithm. Micromachines 2019, 10, 885. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Xu, C.; Li, H.; Xin, P. Research on Heart Sound Denoising Method Based on CEEMDAN and Optimal Wavelet. In Proceedings of the 2022 2nd IEEE International Conference on Consumer Electronics and Computer Engineering (ICCECE), Guangzhou, China, 14–16 January 2022; pp. 629–632. [Google Scholar]
- Abduh, Z.; Nehary, E.A.; Wahed, M.A.; Kadah, Y.M. Classification of heart sounds using fractional Fourier transform based mel-frequency spectral coefficients and traditional classifiers. Biomed. Signal Process. Control 2020, 57, 101788. [Google Scholar] [CrossRef]
- Chowdhury, T.H.; Poudel, K.N.; Hu, Y. Time-frequency analysis, denoising, compression, segmentation, and classification of PCG signals. IEEE Access 2020, 8, 160882–160890. [Google Scholar] [CrossRef]
- Deng, M.; Meng, T.; Cao, J.; Wang, S.; Zhang, J.; Fan, H. Heart sound classification based on improved MFCC features and convolutional recurrent neural networks. Neural Netw. 2020, 130, 22–32. [Google Scholar] [CrossRef]
- Hajihashemi, V.; Gharahbagh, A.A.; Cruz, P.M.; Ferreira, M.C.; Machado, J.J.; Tavares, J.M. Binaural Acoustic Scene Classification Using Wavelet Scattering, Parallel Ensemble Classifiers and Nonlinear Fusion. Sensors 2022, 22, 1535. [Google Scholar] [CrossRef]
- Arslan, Ö.; Karhan, M. Effect of Hilbert-Huang transform on classification of PCG signals using machine learning. J. King Saud Univ.-Comput. Inf. Sci. 2022. [Google Scholar] [CrossRef]
- Chen, P.; Zhang, Q. Classification of heart sounds using discrete time-frequency energy feature based on S transform and the wavelet threshold denoising. Biomed. Signal Process. Control 2020, 57, 101684. [Google Scholar] [CrossRef]
- Li, J.; Ke, L.; Du, Q. Classification of heart sounds based on the wavelet fractal and twin support vector machine. Entropy 2019, 21, 472. [Google Scholar] [CrossRef] [Green Version]
- Sawant, N.K.; Patidar, S.; Nesaragi, N.; Acharya, U.R. Automated detection of abnormal heart sound signals using Fano-factor constrained tunable quality wavelet transform. Biocybern. Biomed. Eng. 2021, 41, 111–126. [Google Scholar] [CrossRef]
- Zeng, W.; Yuan, J.; Yuan, C.; Wang, Q.; Liu, F.; Wang, Y. A new approach for the detection of abnormal heart sound signals using TQWT, VMD and neural networks. Artif. Intell. Rev. 2021, 54, 1613–1647. [Google Scholar] [CrossRef]
- Hajihashemi, V.; Alavigharahbagh, A.; Oliveira, H.S.; Cruz, P.M.; Tavares, J.M. Novel Time-Frequency Based Scheme for Detecting Sound Events from Sound Background in Audio Segments. In Iberoamerican Congress on Pattern Recognition; Springer: Berlin/Heidelberg, Germany, 2021; pp. 402–416. [Google Scholar] [CrossRef]
- Alshamma, O.; Awad, F.H.; Alzubaidi, L.; Fadhel, M.A.; Arkah, Z.M.; Farhan, L. Employment of multi-classifier and multi-domain features for PCG recognition. In Proceedings of the 2019 12th IEEE International Conference on Developments in eSystems Engineering (DeSE), Kazan, Russia, 7–10 October 2019; pp. 321–325. [Google Scholar]
- Chen, W.; Sun, Q.; Chen, X.; Xie, G.; Wu, H.; Xu, C. Deep learning methods for heart sounds classification: A systematic review. Entropy 2021, 23, 667. [Google Scholar] [CrossRef] [PubMed]
- Avanzato, R.; Beritelli, F. Heart sound multiclass analysis based on raw data and convolutional neural network. IEEE Sens. Lett. 2020, 4, 1–4. [Google Scholar] [CrossRef]
- Deperlioglu, O. Heart sound classification with signal instant energy and stacked autoencoder network. Biomed. Signal Process. Control 2021, 64, 102211. [Google Scholar] [CrossRef]
- Er, M.B. Heart sounds classification using convolutional neural network with 1D-local binary pattern and 1D-local ternary pattern features. Appl. Acoust. 2021, 180, 108152. [Google Scholar] [CrossRef]
- Xu, Y.; Xiao, B.; Bi, X.; Li, W.; Zhang, J.; Ma, X. Pay more attention with fewer parameters: A novel 1-D convolutional neural network for heart sounds classification. In Proceedings of the 2018 IEEE Computing in Cardiology Conference (CinC), Maastricht, The Netherlands, 23–26 September 2018; Volume 45, pp. 1–4. [Google Scholar]
- Bakhshi, A.; Harimi, A.; Chalup, S. CyTex: Transforming speech to textured images for speech emotion recognition. Speech Commun. 2022, 139, 62–75. [Google Scholar] [CrossRef]
- Li, S.; Li, F.; Tang, S.; Luo, F. Heart sounds classification based on feature fusion using lightweight neural networks. IEEE Trans. Instrum. Meas. 2021, 70, 1–9. [Google Scholar] [CrossRef]
- Khare, S.K.; Bajaj, V. Time–frequency representation and convolutional neural network-based emotion recognition. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 2901–2909. [Google Scholar] [CrossRef]
- Lopac, N.; Hržić, F.; Vuksanović, I.P.; Lerga, J. Detection of Non-Stationary GW Signals in High Noise From Cohen’s Class of Time–Frequency Representations Using Deep Learning. IEEE Access 2022, 10, 2408–2428. [Google Scholar] [CrossRef]
- Arias-Vergara, T.; Klumpp, P.; Vasquez-Correa, J.C.; Nöth, E.; Orozco-Arroyave, J.R.; Schuster, M. Multi-channel spectrograms for speech processing applications using deep learning methods. Pattern Anal. Appl. 2021, 24, 423–431. [Google Scholar] [CrossRef]
- Ismail, S.; Ismail, B.; Siddiqi, I.; Akram, U. PCG classification through spectrogram using transfer learning. Biomed. Signal Process. Control 2023, 79, 104075. [Google Scholar] [CrossRef]
- Huang, Q. Weight-Quantized SqueezeNet for Resource-Constrained Robot Vacuums for Indoor Obstacle Classification. AI 2022, 3, 180–193. [Google Scholar] [CrossRef]
- Novac, P.E.; Boukli Hacene, G.; Pegatoquet, A.; Miramond, B.; Gripon, V. Quantization and deployment of deep neural networks on microcontrollers. Sensors 2021, 21, 2984. [Google Scholar] [CrossRef] [PubMed]
- Falahzadeh, M.R.; Farokhi, F.; Harimi, A.; Sabbaghi-Nadooshan, R. Deep convolutional neural network and gray wolf optimization algorithm for speech emotion recognition. Circuits Syst. Signal Process. 2022, 1–44. [Google Scholar] [CrossRef]
- Whitaker, B.M.; Suresha, P.B.; Liu, C.; Clifford, G.D.; Anderson, D.V. Combining sparse coding and time-domain features for heart sound classification. Physiol. Meas. 2017, 38, 1701. [Google Scholar] [CrossRef] [PubMed]
- Shekofteh, Y.; Almasganj, F. Autoregressive modeling of speech trajectory transformed to the reconstructed phase space for ASR purposes. Digit. Signal Process. 2013, 23, 1923–1932. [Google Scholar] [CrossRef]
- Harimi, A.; Fakhr, H.S.; Bakhshi, A. Recognition of emotion using reconstructed phase space of speech. Malays. J. Comput. Sci. 2016, 29, 262–271. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef] [Green Version]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Zheng, J.; Lu, C.; Hao, C.; Chen, D.; Guo, D. Improving the generalization ability of deep neural networks for cross-domain visual recognition. IEEE Trans. Cogn. Dev. Syst. 2020, 13, 607–620. [Google Scholar] [CrossRef]
- Hao, C.; Chen, D. Software/Hardware Co-design for Multi-modal Multi-task Learning in Autonomous Systems. In Proceedings of the 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS), Washington, DC, USA, 6–9 June 2021; pp. 1–5. [Google Scholar] [CrossRef]
- Brochu, E.; Cora, V.M.; De Freitas, N. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv 2010, arXiv:1012.2599. [Google Scholar]
- Chollet, F. Deep Learning with Python; Manning: New York, NY, USA, 2018; Volume 361. [Google Scholar]
- Liu, C.; Springer, D.; Li, Q.; Moody, B.; Juan, R.A.; Chorro, F.J.; Castells, F.; Roig, J.M.; Silva, I.; Johnson, A.E.; et al. An open access database for the evaluation of heart sound algorithms. Physiol. Meas. 2016, 37, 2181. [Google Scholar] [CrossRef] [PubMed]
- Milani, M.G.; Abas, P.E.; De Silva, L.C.; Nanayakkara, N.D. Abnormal heart sound classification using phonocardiography signals. Smart Health 2021, 21, 100194. [Google Scholar] [CrossRef]
- Zhong, W.; Liao, L.; Guo, X.; Wang, G. A deep learning approach for fetal QRS complex detection. Physiol. Meas. 2018, 39, 045004. [Google Scholar] [CrossRef] [PubMed]
- Lee, J.S.; Seo, M.; Kim, S.W.; Choi, M. Fetal QRS detection based on convolutional neural networks in noninvasive fetal electrocardiogram. In Proceedings of the 2018 4th International Conference on Frontiers of Signal Processing (ICFSP), Poitiers, France, 24–27 September 2018; pp. 75–78. [Google Scholar]
- Vo, K.; Le, T.; Rahmani, A.M.; Dutt, N.; Cao, H. An efficient and robust deep learning method with 1-D octave convolution to extract fetal electrocardiogram. Sensors 2020, 20, 3757. [Google Scholar] [CrossRef]
- Krupa, A.J.; Dhanalakshmi, S.; Lai, K.W.; Tan, Y.; Wu, X. An IoMT enabled deep learning framework for automatic detection of fetal QRS: A solution to remote prenatal care. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 7200–7211. [Google Scholar]
Hyperparameter | AlexNet | VGG16 | InceptionV3 | ResNet50 |
---|---|---|---|---|
Optimizer function | Adam | RMSprop | Adamax | Adam |
Learning rate | | | | |
Batch size | 98 | 64 | 102 | 86 |
Epochs | 260 | 320 | 280 | 260 |
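For illustration only, the following is a minimal transfer-learning sketch matching the InceptionV3 column of the hyperparameter table above. It assumes a TensorFlow/Keras workflow; the framework choice, the input resolution, the placeholder chaogram arrays (x_train, y_train, x_val, y_val), and the learning-rate value (the table's learning-rate entries are not reproduced in this extraction) are assumptions for this example rather than details confirmed by the paper.

```python
# Hypothetical sketch: fine-tuning an ImageNet-pretrained InceptionV3 on chaogram
# images for binary (normal/abnormal) heart-sound classification.
# The learning rate, input size, and data arrays below are placeholders.
import tensorflow as tf

def build_inceptionv3_classifier(input_shape=(299, 299, 3), num_classes=2):
    # ImageNet-pretrained backbone with the original classification head removed.
    base = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = True  # fine-tune all layers; freezing early layers is an alternative
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(base.input, outputs)

model = build_inceptionv3_classifier()
model.compile(
    optimizer=tf.keras.optimizers.Adamax(learning_rate=1e-4),  # placeholder learning rate
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# x_train/x_val: chaogram images resized to 299x299x3; y_train/y_val: 0 = normal, 1 = abnormal
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           batch_size=102, epochs=280)
```

The other three backbones follow the same pattern by swapping the application class (e.g., tf.keras.applications.ResNet50 or VGG16) and the optimizer, batch-size, and epoch settings listed above; AlexNet is not bundled with Keras and would require a separate implementation.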
Network | Sensitivity (%) | Specificity (%) | Accuracy (%) | Score (%) |
---|---|---|---|---|
AlexNet | 82.55 | 91.21 | 89.68 | 86.88 |
VGG16 | 83.36 | 91.49 | 90.05 | 87.43 |
InceptionV3 | 84.49 | 91.63 | 90.36 | 88.06 |
ResNet50 | 83.68 | 91.39 | 90.02 | 87.54 |
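The Score column appears to be the mean of the reported sensitivity and specificity, i.e., the overall score commonly used with the PhysioNet/CinC 2016 heart sound data cited above; this reading is inferred from the reported values rather than stated explicitly here. For the InceptionV3 row, for example:

```latex
\mathrm{Score} = \frac{Se + Sp}{2} = \frac{84.49 + 91.63}{2} = 88.06\%
```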
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Harimi, A.; Majd, Y.; Gharahbagh, A.A.; Hajihashemi, V.; Esmaileyan, Z.; Machado, J.J.M.; Tavares, J.M.R.S. Classification of Heart Sounds Using Chaogram Transform and Deep Convolutional Neural Network Transfer Learning. Sensors 2022, 22, 9569. https://doi.org/10.3390/s22249569