Automatic User Preferences Selection of Smart Hearing Aid Using BioAid
Abstract
1. Introduction
2. Literature Review
2.1. Hearing Assistive Devices
2.2. Scene Classification
3. Methodology
4. Results
4.1. Results of Acoustic Scene Classification
4.2. Results of Preset Selection
4.3. Comparison with Related Work
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Classifier | Hyperparameters |
|---|---|
| RF | random_state = 0, max_depth = 150, n_estimators = 1000 |
| MLP | hidden_layer_sizes = (350, 300, 200, 100), activation = "relu", random_state = 0, max_iter = 500 |
| ETC | n_estimators = 100, max_depth = 200, random_state = 0 |
| KNN | algorithm = "auto", leaf_size = 5, metric = "minkowski", metric_params = None, n_jobs = 1, n_neighbors = 3 |
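The parameter names in the table match scikit-learn estimator arguments. The following is a minimal sketch of how the four classifiers could be instantiated with these hyperparameters; scikit-learn as the library is an assumption implied by the naming, and the feature matrix and labels are not reproduced here.

```python
# Sketch: the four classifiers with the hyperparameters tabulated above.
# Assumes scikit-learn; hyperparameters not listed keep library defaults.
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

classifiers = {
    "RF": RandomForestClassifier(n_estimators=1000, max_depth=150,
                                 random_state=0),
    "MLP": MLPClassifier(hidden_layer_sizes=(350, 300, 200, 100),
                         activation="relu", random_state=0, max_iter=500),
    "ETC": ExtraTreesClassifier(n_estimators=100, max_depth=200,
                                random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=3, algorithm="auto", leaf_size=5,
                                metric="minkowski", metric_params=None,
                                n_jobs=1),
}
```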
| Classifier | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|
| RF | 99.7% | 1.00 | 1.00 | 1.00 |
| MLP | 96.2% | 0.96 | 0.96 | 0.96 |
| ETC | 95.8% | 0.96 | 0.95 | 0.95 |
| KNN | 89.5% | 0.90 | 0.89 | 0.89 |
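Continuing the sketch above, the reported accuracy, precision, recall, and F1-score could be derived as follows; `X_train`, `X_test`, `y_train`, and `y_test` are hypothetical train/test splits of the feature data, which the paper does not reproduce here.

```python
# Sketch: evaluating each classifier and printing the metrics tabulated above.
from sklearn.metrics import accuracy_score, classification_report

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)          # hypothetical training split
    y_pred = clf.predict(X_test)       # hypothetical held-out split
    print(f"{name}: accuracy = {accuracy_score(y_test, y_pred):.1%}")
    # Per-class and averaged precision/recall/F1, as in the tables above
    print(classification_report(y_test, y_pred, digits=2))
```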
| Precision | Recall | F1-Score |
|---|---|---|
| 1.00 | 1.00 | 1.00 |
| 1.00 | 1.00 | 1.00 |
| 1.00 | 1.00 | 1.00 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).