fNIRS Assessment of Speech Comprehension in Children with Normal Hearing and Children with Hearing Aids in Virtual Acoustic Environments: Pilot Data and Practical Recommendations
Abstract
1. Introduction
1.1. The Influence of Hearing Loss and Auditory Noise on Development
1.2. The Ease of Language Understanding Model
1.3. Behavioral Speech-In-Noise Comprehension Assessments
1.4. Speech Comprehension and Virtual Acoustic Reality
1.5. Speech Comprehension and Functional Near-Infrared Spectroscopy
1.6. A Novel Approach to Elucidate SIN Comprehension: A VAE-fNIRS Application
2. Materials and Methods
2.1. Participants
2.2. Equipment and Virtual Acoustic Environment
2.3. Experimental Design and Procedure
2.4. Preprocessing
2.4.1. Behavioral Data
2.4.2. Neural Data
2.5. Analyses
3. Results
3.1. Behavioral Data
3.2. Neural Data
4. Discussion
Supplementary Materials
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Subramaniam, N.; Ramachandraiah, A. Speech intelligibility issues in classroom acoustics: A review. IE(I) J.-AR 2006, 87, 28–33. [Google Scholar]
- Yang, W.; Bradley, J.S. Effects of room acoustics on the intelligibility of speech in classrooms for young children. J. Acoust. Soc. Am. 2009, 125, 922–933. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Klatte, M.; Bergstrom, K.; Lachmann, T. Does noise affect learning? A short review on noise effects on cognitive performance in children. Front. Psychol. 2013, 4, 578. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Crandell, C.C.; Smaldino, J.J. Classroom Acoustics for Children With Normal Hearing and With Hearing Impairment. Lang. Speech Hear. Serv. Sch. 2000, 31, 362–370. [Google Scholar] [CrossRef] [Green Version]
- Stelmachowicz, P.G.; Pittman, A.L.; Hoover, B.M.; Lewis, D.E.; Moeller, M.P. The importance of high-frequency audibility in the speech and language development of children with hearing loss. Arch. Otolaryngol. 2004, 130, 556–562. [Google Scholar] [CrossRef] [Green Version]
- Tomblin, J.B.; Oleson, J.J.; Ambrose, S.E.; Walker, E.; Moeller, M.P. The influence of hearing aids on the speech and language development of children with hearing loss. JAMA Otolaryngol. Head Neck Surg. 2014, 140, 403–409. [Google Scholar] [CrossRef]
- Moeller, M.P.; Tomblin, J.B.; Yoshinaga-Itano, C.; Connor, C.M.; Jerger, S. Current state of knowledge: Language and literacy of children with hearing impairment. Ear Hear. 2007, 28, 740–753. [Google Scholar] [CrossRef] [Green Version]
- Delage, H.; Tuller, L. Language development and mild-to-moderate hearing loss: Does language normalize with age? J. Speech Lang. Hear. Res. 2007, 50, 1300–1313. [Google Scholar] [CrossRef]
- Ching, T.Y.; Dillon, H.; Katsch, R.; Byrne, D. Maximizing effective audibility in hearing aid fitting. Ear Hear. 2001, 22, 212–224. [Google Scholar] [CrossRef]
- Glista, D.; Scollie, S.; Sulkers, J. Perceptual acclimatization post nonlinear frequency compression hearing aid fitting in older children. J. Speech Lang. Hear. Res. 2012, 55, 1765–1787. [Google Scholar] [CrossRef]
- Ihlefeld, A.; Shinn-Cunningham, B.G. Effect of source spectrum on sound localization in an everyday reverberant room. J. Acoust. Soc. Am. 2011, 130, 324–333. [Google Scholar] [CrossRef]
- Kidd, G.; Mason, C.R.; Brughera, A.; Hartmann, W.M. The role of reverberation in release from masking due to spatial separation of sources for speech identification. Acta Acust. United Acust. 2005, 91, 526–536. [Google Scholar]
- Rudner, M.; Lyberg-Ahlander, V.; Brannstrom, J.; Nirme, J.; Pichora-Fuller, M.K.; Sahlen, B. Listening Comprehension and Listening Effort in the Primary School Classroom. Front. Psychol. 2018, 9, 1193. [Google Scholar] [CrossRef] [PubMed]
- Van Deun, L.; van Wieringen, A.; Wouters, J. Spatial speech perception benefits in young children with normal hearing and cochlear implants. Ear Hear. 2010, 31, 702–713. [Google Scholar] [CrossRef]
- Cameron, S.; Dillon, H.; Newall, P. The listening in Spatialized Noise test: Normative data for children. Int. J. Audiol. 2006, 45, 99–108. [Google Scholar] [CrossRef] [PubMed]
- Brown, A.D.; Rodriguez, F.A.; Portnuff, C.D.; Goupell, M.J.; Tollin, D.J. Time-Varying Distortions of Binaural Information by Bilateral Hearing Aids: Effects of Nonlinear Frequency Compression. Trends Hear. 2016, 20. [Google Scholar] [CrossRef]
- Ching, T.Y.C.; van Wanrooy, E.; Dillon, H.; Carter, L. Spatial release from masking in normal-hearing children and children who use hearing aids. J. Acoust. Soc. Am. 2011, 129, 368–375. [Google Scholar] [CrossRef] [Green Version]
- Ching, T.Y.C.; Zhang, V.W.; Flynn, C.; Burns, L.; Button, L.; Hou, S.N.; McGhie, K.; Van Buynder, P. Factors influencing speech perception in noise for 5-year-old children using hearing aids or cochlear implants. Int. J. Audiol. 2018, 57, S70–S80. [Google Scholar] [CrossRef]
- Rönnberg, J.; Lunner, T.; Zekveld, A.; Sörqvist, P.; Danielsson, H.; Lyxell, B.; Dahlström, Ö.; Signoret, C.; Stenfelt, S.; Pichora-Fuller, M.K. The Ease of Language Understanding (ELU) model: Theoretical, empirical, and clinical advances. Front. Syst. Neurosci. 2013, 7, 31. [Google Scholar] [CrossRef] [Green Version]
- Rönnberg, J.; Holmer, E.; Rudner, M. Cognitive hearing science and ease of language understanding. Int. J. Audiol. 2019, 58, 247–261. [Google Scholar] [CrossRef]
- Holmer, E.; Heimann, M.; Rudner, M. Imitation, Sign Language Skill and the Developmental Ease of Language Understanding (D-ELU) Model. Front. Psychol. 2016, 7. [Google Scholar] [CrossRef] [Green Version]
- Rudner, M.; Holmer, E. Working Memory in Deaf Children Is Explained by the Developmental Ease of Language Understanding (D-ELU) Model. Front. Psychol. 2016, 7, 1047. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Signoret, C.; Rudner, M. Hearing Impairment and Perceived Clarity of Predictable Speech. Ear Hear. 2019, 40, 1140–1148. [Google Scholar] [CrossRef] [Green Version]
- McCreery, R.W.; Walker, E.; Spratford, M.; Lewis, D.; Brennan, M. Auditory, cognitive, and linguistic factors predict speech recognition in adverse listening conditions for children with hearing loss. Front. Neurosci. 2019, 13, 1093. [Google Scholar] [CrossRef] [Green Version]
- Nilsson, M.; Soli, S.D.; Sullivan, J.A. Development of the Hearing in Noise Test for the Measurement of Speech Reception Thresholds in Quiet and in Noise. J. Acoust. Soc. Am. 1994, 95, 1085–1099. [Google Scholar] [CrossRef]
- Wilson, R.H. Development of a speech-in-multitalker-babble paradigm to assess word-recognition performance. J. Am. Acad. Audiol. 2003, 14, 453–470. [Google Scholar] [CrossRef]
- Cameron, S.; Dillon, H. Development of the Listening in Spatialized Noise-Sentences Test (LISN-S). Ear Hear. 2007, 28, 196–211. [Google Scholar] [CrossRef] [PubMed]
- Kollmeier, B.; Wesselkamp, M. Development and evaluation of a German sentence test for objective and subjective speech intelligibility assessment. J. Acoust. Soc. Am. 1997, 102, 2412–2421. [Google Scholar] [CrossRef]
- Wagener, K.; Brand, T.; Kollmeier, B. Entwicklung und Evaluation eines Satztests für die deutsche Sprache. I–III: Design, Optimierung und Evaluation des Oldenburger Satztests (Development and evaluation of a sentence test for the German language. I–III: Design, optimization and evaluation of the Oldenburg sentence test). Z. Für Audiol. Audiol. Acoust. 1999, 38, 4–15. [Google Scholar]
- Döring, W.H.; Hamacher, V. Neue Sprachverständlichkeitstests in der Klinik: Aachener Logatomtest und „Dreinsilbertest“ mit Störschall. In Moderne Verfahren der Sprachaudiometrie; Kollmeier, B., Ed.; Heidelberg, Germany, 1992; pp. 137–168. [Google Scholar]
- Wagener, K.; Kollmeier, B. Evaluation des Oldenburger Satztests mit Kindern und Oldenburger Kinder-Satztest. Z. Audiol. 2005, 44, 134–143. [Google Scholar]
- Vickers, D.; Degun, A.; Canas, A.; Stainsby, T.; Vanpoucke, F. Deactivating Cochlear Implant Electrodes Based on Pitch Information for Users of the ACE Strategy. Adv. Exp. Med. Biol. 2016, 894, 115–123. [Google Scholar] [CrossRef] [Green Version]
- Bronkhorst, A.W. Localization of real and virtual sound sources. J. Acoust. Soc. Am. 1995, 98, 2542–2553. [Google Scholar] [CrossRef]
- Wenzel, E.M.; Arruda, M.; Kistler, D.J.; Wightman, F.L. Localization using nonindividualized head-related transfer functions. J. Acoust. Soc. Am. 1993, 94, 111–123. [Google Scholar] [CrossRef]
- Denk, F.; Ewert, S.D.; Kollmeier, B. On the limitations of sound localization with hearing devices. J. Acoust. Soc. Am. 2019, 146, 1732–1744. [Google Scholar] [CrossRef]
- Pausch, F.; Fels, J. Localization Performance in a Binaural Real-Time Auralization System Extended to Research Hearing Aids. Trends Hear. 2020, 24. [Google Scholar] [CrossRef]
- Best, V.; Kalluri, S.; McLachlan, S.; Valentine, S.; Edwards, B.; Carlile, S. A comparison of CIC and BTE hearing aids for three-dimensional localization of speech. Int. J. Audiol. 2010, 49, 723–732. [Google Scholar] [CrossRef]
- Van den Bogaert, T.; Carette, E.; Wouters, J. Sound source localization using hearing aids with microphones placed behind-the-ear, in-the-canal, and in-the-pinna. Int. J. Audiol. 2011, 50, 164–176. [Google Scholar] [CrossRef]
- Johnstone, P.M.; Nabelek, A.K.; Robertson, V.S. Sound Localization Acuity in Children with Unilateral Hearing Loss Who Wear a Hearing Aid in the Impaired Ear. J. Am. Acad. Audiol. 2010, 21, 522–534. [Google Scholar] [CrossRef]
- Kolarik, A.J.; Cirstea, S.; Pardhan, S. Evidence for enhanced discrimination of virtual auditory distance among blind listeners using level and direct-to-reverberant cues. Exp. Brain Res. 2013, 224, 623–633. [Google Scholar] [CrossRef]
- Kolarik, A.J.; Pardhan, S.; Cirstea, S.; Moore, B.C. Auditory spatial representations of the world are compressed in blind humans. Exp. Brain Res. 2017, 235, 597–606. [Google Scholar] [CrossRef] [Green Version]
- Shinn-Cunningham, B.G. Distance cues for virtual auditory space. In Proceedings of the IEEE-PCM, Sydney, Australia, 13–15 December 2000; pp. 227–230. [Google Scholar]
- Zahorik, P. Assessing auditory distance perception using virtual acoustics. J. Acoust. Soc. Am. 2002, 111, 1832–1846. [Google Scholar] [CrossRef]
- Courtois, G.; Grimaldi, V.; Lissek, H.; Estoppey, P.; Georganti, E. Perception of Auditory Distance in Normal-Hearing and Moderate-to-Profound Hearing-Impaired Listeners. Trends Hear. 2019, 23. [Google Scholar] [CrossRef]
- Oberem, J.; Lawo, V.; Koch, I.; Fels, J. Intentional Switching in Auditory Selective Attention: Exploring Different Binaural Reproduction Methods in an Anechoic Chamber. Acta Acust. United Acust. 2014, 100, 1139–1148. [Google Scholar] [CrossRef]
- Oberem, J.; Seibold, J.; Koch, I.; Fels, J. Intentional switching in auditory selective attention: Exploring attention shifts with different reverberation times. Hear. Res. 2018, 359, 32–39. [Google Scholar] [CrossRef]
- MacCutcheon, D.; Hurtig, A.; Pausch, F.; Hygge, S.; Fels, J.; Ljung, R. Second language vocabulary level is related to benefits for second language listening comprehension under lower reverberation time conditions. J. Cogn. Psychol. 2019, 31, 175–185. [Google Scholar] [CrossRef]
- Peng, Z.E.; Wang, L.M. Listening Effort by Native and Nonnative Listeners Due to Noise, Reverberation, and Talker Foreign Accent During English Speech Perception. J. Speech Lang. Hear. Res. 2019, 62, 1068–1081. [Google Scholar] [CrossRef] [PubMed]
- Peng, Z.E.; Wang, L.M. Effects of noise, reverberation and foreign accent on native and non-native listeners’ performance of English speech comprehension. J. Acoust. Soc. Am. 2016, 139, 2772–2783. [Google Scholar] [CrossRef] [Green Version]
- Helms Tillery, K.; Brown, C.A.; Bacon, S.P. Comparing the effects of reverberation and of noise on speech recognition in simulated electric-acoustic listening. J. Acoust. Soc. Am. 2012, 131, 416–423. [Google Scholar] [CrossRef] [Green Version]
- MacCutcheon, D.; Pausch, F.; Fels, J.; Ljung, R. The effect of language, spatial factors, masker type and memory span on speech-in-noise thresholds in sequential bilingual children. Scand. J. Psychol. 2018, 59, 567–577. [Google Scholar] [CrossRef]
- MacCutcheon, D.; Pausch, F.; Fullgrabe, C.; Eccles, R.; van der Linde, J.; Panebianco, C.; Fels, J.; Ljung, R. The Contribution of Individual Differences in Memory Span and Language Ability to Spatial Release From Masking in Young Children. J. Speech Lang. Hear. Res. 2019, 62, 3741–3751. [Google Scholar] [CrossRef] [Green Version]
- Ricketts, T.A.; Picou, E.M.; Shehorn, J.; Dittberner, A.B. Degree of Hearing Loss Affects Bilateral Hearing Aid Benefits in Ecologically Relevant Laboratory Conditions. J. Speech Lang. Hear. Res. 2019, 3834–3850. [Google Scholar] [CrossRef]
- Defenderfer, J.; Kerr-German, A.; Hedrick, M.; Buss, A.T. Investigating the role of temporal lobe activation in speech perception accuracy with normal hearing adults: An event-related fNIRS study. Neuropsychologia 2017, 106, 31–41. [Google Scholar] [CrossRef]
- Wijayasiri, P.; Hartley, D.E.H.; Wiggins, I.M. Brain activity underlying the recovery of meaning from degraded speech: A functional near-infrared spectroscopy (fNIRS) study. Hear. Res. 2017, 351, 55–67. [Google Scholar] [CrossRef]
- Zhang, M.; Ying, Y.L.M.; Ihlefeld, A. Spatial Release From Informational Masking: Evidence From Functional Near Infrared Spectroscopy. Trends Hear. 2018, 22. [Google Scholar] [CrossRef]
- Olds, C.; Pollonini, L.; Abaya, H.; Larky, J.; Loy, M.; Bortfeld, H.; Beauchamp, M.S.; Oghalai, J.S. Cortical Activation Patterns Correlate with Speech Understanding After Cochlear Implantation. Ear Hear. 2016, 37, e160–e172. [Google Scholar] [CrossRef] [Green Version]
- Rowland, S.C.; Hartley, D.E.H.; Wiggins, I.M. Listening in Naturalistic Scenes: What Can Functional Near-Infrared Spectroscopy and Intersubject Correlation Analysis Tell Us About the Underlying Brain Activity? Trends Hear. 2018, 22. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Anderson, C.A.; Wiggins, I.M.; Kitterick, P.T.; Hartley, D.E.H. Pre-operative Brain Imaging Using Functional Near-Infrared Spectroscopy Helps Predict Cochlear Implant Outcome in Deaf Adults. JARO J. Assoc. Res. Otolaryngol. 2019, 20, 511–528. [Google Scholar] [CrossRef] [Green Version]
- Mushtaq, F.; Wiggins, I.M.; Kitterick, P.T.; Anderson, C.A.; Hartley, D.E.H. Evaluating time-reversed speech and signal-correlated noise as auditory baselines for isolating speech-specific processing using fNIRS. PLoS ONE 2019, 14. [Google Scholar] [CrossRef] [Green Version]
- Mushtaq, F.; Wiggins, I.M.; Kitterick, P.T.; Anderson, C.A.; Hartley, D.E.H. The Benefit of Cross-Modal Reorganization on Speech Perception in Pediatric Cochlear Implant Recipients Revealed Using Functional Near-Infrared Spectroscopy. Front. Hum. Neurosci. 2020, 14, 308. [Google Scholar] [CrossRef]
- Puschmann, S.; Daeglau, M.; Stropahl, M.; Mirkovic, B.; Rosemann, S.; Thiel, C.M.; Debener, S. Hearing-impaired listeners show increased audiovisual benefit when listening to speech in noise. Neuroimage 2019, 196, 261–268. [Google Scholar] [CrossRef]
- Marsella, P.; Scorpecci, A.; Cartocci, G.; Giannantonio, S.; Maglione, A.G.; Venuti, I.; Brizi, A.; Babiloni, F. EEG activity as an objective measure of cognitive load during effortful listening: A study on pediatric subjects with bilateral, asymmetric sensorineural hearing loss. Int. J. Pediatr. Otorhi. 2017, 99, 1–7. [Google Scholar] [CrossRef] [Green Version]
- Telkemeyer, S.; Rossi, S.; Nierhaus, T.; Steinbrink, J.; Obrig, H.; Wartenburger, I. Acoustic processing of temporally modulated sounds in infants: Evidence from a combined near-infrared spectroscopy and EEG study. Front. Psychol. 2011, 2. [Google Scholar] [CrossRef] [Green Version]
- Dai, B.H.; Chen, C.S.; Long, Y.H.; Zheng, L.F.; Zhao, H.; Bai, X.L.; Liu, W.D.; Zhang, Y.X.; Liu, L.; Guo, T.M.; et al. Neural mechanisms for selectively tuning in to the target speaker in a naturalistic noisy situation. Nat. Commun. 2018, 9. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Jiang, J.; Dai, B.H.; Peng, D.L.; Zhu, C.Z.; Liu, L.; Lu, C.M. Neural Synchronization during Face-to-Face Communication. J. Neurosci. 2012, 32, 16064–16069. [Google Scholar] [CrossRef]
- Zion Golumbic, E.M.; Ding, N.; Bickel, S.; Lakatos, P.; Schevon, C.A.; McKhann, G.M.; Goodman, R.R.; Emerson, R.; Mehta, A.D.; Simon, J.Z.; et al. Mechanisms underlying selective neuronal tracking of attended speech at a “cocktail party”. Neuron 2013, 77, 980–991. [Google Scholar] [CrossRef] [Green Version]
- Puschmann, S.; Steinkamp, S.; Gillich, I.; Mirkovic, B.; Debener, S.; Thiel, C.M. The Right Temporoparietal Junction Supports Speech Tracking During Selective Listening: Evidence from Concurrent EEG-fMRI. J. Neurosci. 2017, 37, 11505–11516. [Google Scholar] [CrossRef] [Green Version]
- Wong, P.C.M.; Jin, J.X.M.; Gunasekera, G.M.; Abel, R.; Lee, E.R.; Dhar, S. Aging and cortical mechanisms of speech perception in noise. Neuropsychologia 2009, 47, 693–703. [Google Scholar] [CrossRef] [Green Version]
- Soli, S.D.; Wong, L.L.N. Assessment of speech intelligibility in noise with the Hearing in Noise Test. Int. J. Audiol. 2008, 47, 356–361. [Google Scholar] [CrossRef]
- Torkildsen, J.V.K.; Hitchins, A.; Myhrum, M.; Wie, O.B. Speech-in-Noise Perception in Children With Cochlear Implants, Hearing Aids, Developmental Language Disorder and Typical Development: The Effects of Linguistic and Cognitive Abilities. Front. Psychol. 2019, 10, 2530. [Google Scholar] [CrossRef]
- Grimm, G.; Luberadzka, J.; Hohmann, V. Virtual acoustic environments for comprehensive evaluation of model-based hearing devices. Int. J. Audiol. 2018, 57, S112–S117. [Google Scholar] [CrossRef]
- Aspöck, L.; Vorländer, M. Room geometry acquisition and processing methods for geometrical acoustics simulation models. In Proceedings of the EuroRegio 2016, Porto, Portugal, 13–15 June 2016. [Google Scholar]
- Ahrens, A.; Marschall, M.; Dau, T. Measuring and modeling speech intelligibility in real and loudspeaker-based virtual sound environments. Hear. Res. 2019, 377, 307–317. [Google Scholar] [CrossRef]
- Pelzer, S.; Aspöck, L.; Schröder, D.; Vorländer, M. Interactive real-time simulation and auralization for modifiable rooms. Build. Acoust. 2014, 21, 65–73. [Google Scholar] [CrossRef]
- Grimm, G.; Luberadzka, J.; Hohmann, V. A toolbox for rendering virtual acoustic environments in the context of audiology. Acta Acust. United Acust. 2019, 105, 566–578. [Google Scholar] [CrossRef]
- Dewey, R.S.; Hartley, D.E. Cortical cross-modal plasticity following deafness measured using functional near-infrared spectroscopy. Hear. Res. 2015, 325, 55–63. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Sevy, A.B.; Bortfeld, H.; Huppert, T.J.; Beauchamp, M.S.; Tonini, R.E.; Oghalai, J.S. Neuroimaging with near-infrared spectroscopy demonstrates speech-evoked activity in the auditory cortex of deaf children following cochlear implantation. Hear. Res. 2010, 270, 39–47. [Google Scholar] [CrossRef] [Green Version]
- Lawler, C.A.; Wiggins, I.M.; Dewey, R.S.; Hartley, D.E. The use of functional near-infrared spectroscopy for measuring cortical reorganisation in cochlear implant users: A possible predictor of variable speech outcomes? Cochlear Implant. Int. 2015, 16 (Suppl. 1), S30–S32. [Google Scholar] [CrossRef]
- van de Rijt, L.P.; van Opstal, A.J.; Mylanus, E.A.; Straatman, L.V.; Hu, H.Y.; Snik, A.F.; van Wanrooij, M.M. Temporal Cortex Activation to Audiovisual Speech in Normal-Hearing and Cochlear Implant Users Measured with Functional Near-Infrared Spectroscopy. Front. Hum. Neurosci. 2016, 10, 48. [Google Scholar] [CrossRef] [Green Version]
- Zhou, X.; Seghouane, A.-K.; Shah, A.; Innes-Brown, H.; Cross, W.; Litovsky, R.; McKay, C. Cortical Speech Processing in Postlingually Deaf Adult Cochlear Implant Users, as Revealed by Functional Near-Infrared Spectroscopy. Trends Hear. 2018. [Google Scholar] [CrossRef] [PubMed]
- Anderson, C.A.; Wiggins, I.M.; Kitterick, P.T.; Hartley, D.E.H. Adaptive benefit of cross-modal plasticity following cochlear implantation in deaf adults. Proc. Natl. Acad. Sci. USA 2017, 114, 10256–10261. [Google Scholar] [CrossRef] [Green Version]
- Quaresima, V.; Bisconti, S.; Ferrari, M. A brief review on the use of functional near-infrared spectroscopy (fNIRS) for language imaging studies in human newborns and adults. Brain Lang. 2012, 121, 79–89. [Google Scholar] [CrossRef]
- Bell, L.; Scharke, W.; Reindl, V.; Fels, J.; Neuschaefer-Rube, C.; Konrad, K. Auditory and Visual Response Inhibition in Children with Bilateral Hearing Aids and Children with ADHD. Brain Sci. 2020, 10, 307. [Google Scholar] [CrossRef]
- Minagawa-Kawai, Y.; Naoi, N.; Kojima, S. Fundamentals of the NIRS System. In New Approach to Functional Neuroimaging: Near Infrared Spectroscopy; Keio University Press: Tokyo, Japan, 2009. [Google Scholar]
- Lawrence, R.J.; Wiggins, I.M.; Anderson, C.A.; Davies-Thompson, J.; Hartley, D.E.H. Cortical correlates of speech intelligibility measured using functional near-infrared spectroscopy (fNIRS). Hear. Res. 2018, 370, 53–64. [Google Scholar] [CrossRef] [PubMed]
- Pollonini, L.; Olds, C.; Abaya, H.; Bortfeld, H.; Beauchamp, M.S.; Oghalai, J.S. Auditory cortex activation to natural speech and simulated cochlear implant speech measured with functional near-infrared spectroscopy. Hear. Res. 2014, 309, 84–93. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Wiggins, I.M.; Wijayasiri, P.; Hartley, D.E.H. Shining a light on the neural signature of effortful listening. J. Acoust. Soc. Am. 2016, 139, 2074. [Google Scholar] [CrossRef]
- Masiero, B.S. Individualized Binaural Technology: Measurement, Equalization and Perceptual Evaluation; Logos Verlag Berlin GmbH: Berlin, Germany, 2012; Volume 13. [Google Scholar]
- Pausch, F.; Aspock, L.; Vorlander, M.; Fels, J. An Extended Binaural Real-Time Auralization System With an Interface to Research Hearing Aids for Experiments on Subjects With Hearing Loss. Trends Hear. 2018, 22. [Google Scholar] [CrossRef] [Green Version]
- Schröder, D. Physically Based Real-Time Auralization of Interactive Virtual Environments; Logos Verlag Berlin GmbH: Berlin, Germany, 2011; Volume 11. [Google Scholar]
- Fels, J.; Buthmann, P.; Vorlander, M. Head-related transfer functions of children. Acta Acust. United Acust. 2004, 90, 918–927. [Google Scholar]
- Fels, J.; Vorlander, M. Anthropometric Parameters Influencing Head-Related Transfer Functions. Acta Acust. United Acust. 2009, 95, 331–342. [Google Scholar] [CrossRef]
- Bomhardt, R.; Fels, J. Analytical interaural time difference model for the individualization of arbitrary Head-Related Impulse Responses. In Proceedings of the Audio Engineering Society Convention 137, Los Angeles, CA, USA, 9–12 October 2014. [Google Scholar]
- Middlebrooks, J.C. Individual differences in external-ear transfer functions reduced by scaling in frequency. J. Acoust. Soc. Am. 1999, 106, 1480–1492. [Google Scholar] [CrossRef]
- Schmitz, A. Ein neues digitales Kunstkopfmeßsystem. Acta Acust. United Acust. 1995, 81, 416–420. [Google Scholar]
- Stone, M.A.; Moore, B.C.J.; Meisenbacher, K.; Derleth, R.P. Tolerable hearing aid delays. V. Estimation of limits for open canal fittings. Ear Hear. 2008, 29, 601–617. [Google Scholar] [CrossRef]
- The MathWorks Inc. Global Optimization Toolbox User’s Guide (R2019a); The MathWorks Inc.: Natick, MA, USA, 2019. [Google Scholar]
- Grimm, G.; Herzke, T.; Berg, D.; Hohmann, V. The master hearing aid: A PC-based platform for algorithm development and evaluation. Acta Acust. United Acust. 2006, 92, 618–628. [Google Scholar]
- Keidser, G.; Dillon, H.; Flax, M.; Ching, T.; Brewer, S. The NAL-NL2 prescription procedure. Audiol. Res. 2011, 1. [Google Scholar] [CrossRef] [Green Version]
- Jasper, H. The 10/20 international electrode system. EEG Clin. Neurophysiol. 1958, 10, 370–375. [Google Scholar]
- Tsuzuki, D.; Jurcak, V.; Singh, A.K.; Okamoto, M.; Watanabe, E.; Dan, I. Virtual spatial registration of stand-alone fNIRS data to MNI space. Neuroimage 2007, 34, 1506–1518. [Google Scholar] [CrossRef] [PubMed]
- Jichi Medical University. Available online: http://www.jichi.ac.jp/brainlab/virtual_registration/Result3x5_E.html (accessed on 24 September 2018).
- Gagnon, L.; Yucel, M.A.; Dehaes, M.; Cooper, R.J.; Perdue, K.L.; Selb, J.; Huppert, T.J.; Hoge, R.D.; Boas, D.A. Quantification of the cortical contribution to the NIRS signal over the motor cortex using concurrent NIRS-fMRI measurements. Neuroimage 2012, 59, 3933–3940. [Google Scholar] [CrossRef] [Green Version]
- Peng, Z.E.; Pausch, F.; Fels, J. Auditory training of spatial processing in children with hearing loss in virtual acoustic environments: Pretest results. In Proceedings of the DAGA 2016—42. Jahrestagung für Akustik (Deutsche Gesellschaft für Akustik), Aachen, Germany, 14–17 March 2016. [Google Scholar]
- Hochmair-Desoyer, I.; Schulz, E.; Moser, L.; Schmidt, M. The HSM sentence test as a tool for evaluating the speech understanding in noise of cochlear implant users. Am. J. Otolaryngol. 1997, 18, S83. [Google Scholar]
- Levitt, H. Transformed up-down methods in psychoacoustics. J. Acoust. Soc. Am. 1971, 49, 467–477. [Google Scholar] [CrossRef]
- Fruend, I.; Haenel, N.V.; Wichmann, F.A. Inference for psychometric functions in the presence of nonstationary behavior. J. Vis. 2011, 11. [Google Scholar] [CrossRef]
- Buss, E.; Hall, J.W.; Grose, J.H.; Dev, M.B. A comparison of threshold estimation methods in children 6–11 years of age. J. Acoust. Soc. Am. 2001, 109, 727–731. [Google Scholar] [CrossRef]
- Schutt, H.H.; Harmeling, S.; Macke, J.H.; Wichmann, F.A. Painfree and accurate Bayesian estimation of psychometric functions for (potentially) overdispersed data. Vis. Res. 2016, 122, 105–123. [Google Scholar] [CrossRef] [Green Version]
- Huppert, T.J.; Diamond, S.G.; Franceschini, M.A.; Boas, D.A. HomER: A review of time-series analysis methods for near-infrared spectroscopy of the brain. Appl. Opt. 2009, 48, D280–D298. [Google Scholar] [CrossRef] [Green Version]
- Tak, S.; Uga, M.; Flandin, G.; Dan, I.; Penny, W.D. Sensor space group analysis for fNIRS data. J. Neurosci. Methods 2016, 264, 103–112. [Google Scholar] [CrossRef]
- Jahani, S.; Setarehdan, S.K.; Boas, D.A.; Yucel, M.A. Motion artifact detection and correction in functional near-infrared spectroscopy: A new hybrid method based on spline interpolation method and Savitzky-Golay filtering. Neurophotonics 2018, 5. [Google Scholar] [CrossRef] [Green Version]
- Di Lorenzo, R.; Pirazzoli, L.; Blasi, A.; Bulgarelli, C.; Hakuno, Y.; Minagawa, Y.; Brigadoi, S. Recommendations for motion correction of infant fNIRS data applicable to multiple data sets and acquisition systems. Neuroimage 2019, 200, 511–527. [Google Scholar] [CrossRef]
- IBM Corp. IBM SPSS Statistics for Windows, Version 23.0; IBM Corp.: Armonk, NY, USA, 2015. [Google Scholar]
- R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2019. [Google Scholar]
- Crawford, J.R.; Garthwaite, P.H.; Porter, S. Point and interval estimates of effect sizes for the case-controls design in neuropsychology: Rationale, methods, implementations, and proposed reporting standards. Cogn. Neuropsychol. 2010, 27, 245–260. [Google Scholar] [CrossRef] [PubMed]
- Crawford, J.R.; Garthwaite, P.H. Investigation of the single case in neuropsychology: Confidence limits on the abnormality of test scores and test score differences. Neuropsychologia 2002, 40, 1196–1208. [Google Scholar] [CrossRef] [Green Version]
- Crawford, J.R.; Howell, D.C. Comparing an individual’s test score against norms derived from small samples. Clin. Neuropsychol. 1998, 12, 482–486. [Google Scholar] [CrossRef]
- Klatte, M.; Hellbruck, J.; Seidel, J.; Leistner, P. Effects of Classroom Acoustics on Performance and Well-Being in Elementary School Children: A Field Study. Environ. Behav. 2010, 42, 659–692. [Google Scholar] [CrossRef] [Green Version]
- Klatte, M.; Lachmann, T.; Meis, M. Effects of noise and reverberation on speech perception and listening comprehension of children and adults in a classroom-like setting. Noise Health 2010, 12, 270–282. [Google Scholar] [CrossRef] [PubMed]
- Garcia, D.P.; Rasmussen, B.; Brunskog, J. Classroom acoustics design for speakers’ comfort and speech intelligibility: A European perspective. In Proceedings of the 7th Forum Acusticum, Krakow, Poland, 7–12 September 2014. [Google Scholar]
- Dimitrijevic, A.; Smith, M.L.; Kadis, D.S.; Moore, D.R. Neural indices of listening effort in noisy environments. Sci. Rep. 2019, 9, 11278. [Google Scholar] [CrossRef] [Green Version]
- Bonna, K.; Finc, K.; Zimmermann, M.; Bola, L.; Mostowski, P.; Szul, M.; Rutkowski, P.; Duch, W.; Marchewka, A.; Jednoróg, K. Early deafness leads to re-shaping of global functional connectivity beyond the auditory cortex. arXiv 2019, arXiv:1903.11915. [Google Scholar]
- Bell, L.; Wagels, L.; Neuschaefer-Rube, C.; Fels, J.; Gur, R.E.; Konrad, K. The Cross-Modal Effects of Sensory Deprivation on Spatial and Temporal Processes in Vision and Audition: A Systematic Review on Behavioral and Neuroimaging Research since 2000. Neural Plast. 2019, 2019, 21. [Google Scholar] [CrossRef] [Green Version]
- Minagawa, Y.; Xu, M.; Morimoto, S. Toward Interactive Social Neuroscience: Neuroimaging Real-World Interactions in Various Populations. Jpn. Psychol. Res. 2018, 60, 196–224. [Google Scholar] [CrossRef]
- Ahrens, A.; Lund, K.D.; Marschall, M.; Dau, T. Sound source localization with varying amount of visual information in virtual reality. PLoS ONE 2019, 14, e0214603. [Google Scholar] [CrossRef] [Green Version]
- Nirme, J.; Sahlén, B.; Åhlander, V.L.; Brännström, J.; Haake, M. Audio-visual speech comprehension in noise with real and virtual speakers. Speech Commun. 2020, 116, 44–55. [Google Scholar] [CrossRef]
- Lalonde, K.; McCreery, R.W. Audiovisual Enhancement of Speech Perception in Noise by School-Age Children Who Are Hard of Hearing. Ear Hear. 2020, 41, 705–719. [Google Scholar] [CrossRef]
- van de Rijt, L.P.H.; van Wanrooij, M.M.; Snik, A.F.M.; Mylanus, E.A.M.; van Opstal, A.J.; Roye, A. Measuring Cortical Activity During Auditory Processing with Functional Near-Infrared Spectroscopy. J. Hear. Sci. 2018, 8, 9–18. [Google Scholar] [CrossRef] [PubMed]
- Weder, S.; Shoushtarian, M.; Olivares, V.; Zhou, X.; Innes-Brown, H.; McKay, C. Cortical fNIRS Responses Can Be Better Explained by Loudness Percept than Sound Intensity. Ear Hear. 2020, 41, 1187–1195. [Google Scholar] [CrossRef]
- Chen, L.C.; Sandmann, P.; Thorne, J.; Herrmann, C.; Debener, S. Association of Concurrent fNIRS and EEG Signatures in Response to Auditory and Visual Stimuli. Brain Topogr. 2015, 28, 710–725. [Google Scholar] [CrossRef] [PubMed]
- Scholkmann, F.; Kleiser, S.; Metz, A.J.; Zimmermann, R.; Mata Pavia, J.; Wolf, U.; Wolf, M. A review on continuous wave functional near-infrared spectroscopy and imaging instrumentation and methodology. Neuroimage 2014, 85 Pt 1, 6–27. [Google Scholar] [CrossRef]
- Tachtsidis, I.; Scholkmann, F. False positives and false negatives in functional near-infrared spectroscopy: Issues, challenges, and the way forward. Neurophotonics 2016, 3. [Google Scholar] [CrossRef] [Green Version]
- Scholkmann, F.; Wolf, M. General equation for the differential pathlength factor of the frontal human head depending on wavelength and age. J. Biomed. Opt. 2013, 18, 105004. [Google Scholar] [CrossRef] [Green Version]
- Huppert, T.J.; Karim, H.; Lin, C.C.; Alqahtani, B.A.; Greenspan, S.L.; Sparto, P.J. Functional imaging of cognition in an old-old population: A case for portable functional near-infrared spectroscopy. PLoS ONE 2017, 12. [Google Scholar] [CrossRef] [PubMed]
- Mei, N.; Flinker, A.; Zhu, M.; Cai, Q.; Tian, X. Lateralization in the dichotic listening of tones is influenced by the content of speech. Neuropsychologia 2020, 140, 107389. [Google Scholar] [CrossRef]
- Peelle, J.E. The hemispheric lateralization of speech processing depends on what “speech” is: A hierarchical perspective. Front. Hum. Neurosci. 2012, 6, 309. [Google Scholar] [CrossRef] [Green Version]
- Minagawa-Kawai, Y.; Cristia, A.; Dupoux, E. Cerebral lateralization and early speech acquisition: A developmental scenario. Dev. Cogn. Neurosci. 2011, 1, 217–232. [Google Scholar] [CrossRef] [Green Version]
Outcome Measure | Population of Interest | Behavioral/Neuroimaging Method | Test/Example Studies | Overview
---|---|---|---|---
A | Behavioral SIN Assessments | | |
SIN recognition assessments with varying SNR and/or noise location | Adults with NH/HL | Behavioral | Hearing In Noise Test—HINT [25] | Headphone-based; recordings of 250 sentences by a male speaker, intended for adaptive SRT measurements in quiet or in spectrally matched noise
 | Adults with NH/HL | | Oldenburger Satztest (Oldenburger sentence test)—OlSa [28,29] | Headphone-based; recordings of sentences consisting of random combinations of 50 words, used to measure the SRT in quiet and in noise
 | Adults with NH/HL | | Words-In-Noise test—WiN [26], for clinical use | Earphone-based; recordings of 70 words embedded in unique segments of multi-talker distractor noise, intended for adaptive SRT measurements
 | Adults with NH/HL | | Döring test [30], for clinical use | Loudspeaker-based; recordings are single syllables of the “Freiburger Sprachverständnistest” (Freiburger speech comprehension test), each repeated three times in background noise (words of the “Freiburger Sprachverständnistest”); the spatial locations of noise and target are varied (spatially separated vs. co-located)
 | Children with NH/HL | | Listening in Spatialized Noise-Sentences test—LiSN-S [27] | Headphone-based; recordings of 120 sentences by a female speaker, intended for adaptive SRT measurements in background speech from two masking talkers (two female speakers recording two distractor stories) in four conditions: maskers are either spatially co-located with the target or at ±90° azimuth, and either share the same pitch as the target or differ in pitch
 | Children with NH/HL | | “Oldenburger Kinder-Satztest” (Oldenburger sentence test for children)—OlKiSa [31] | Headphone-based; simplified version of the Oldenburger sentence test (OlSa); recordings of sentences consisting of random combinations of 21 words, used to measure the SRT in quiet and in noise
 | Children with NH/HL | | Children’s Coordinate Response Measure—CCRM [32] | Headphone-based; recordings of sentences intended for adaptive SRT measurements in either 20-talker babble or speech-shaped noise
B | Listening in the Free Field or VAEs | | |
Sound localization | Adults with NH | Behavioral | Bronkhorst [33], Wenzel et al. [34], Denk et al. [35], Pausch and Fels [36] | Investigations of auditory sound localization, distance perception, and attention switching using ear-/headphones, research HAs, or loudspeaker-based reproduction of auditory stimuli, with or without manipulation of acoustic variables including but not limited to reverberation, interaural level differences, and sound intensity
 | Adults with HL | | Best et al. [37], van den Bogaert et al. [38] |
 | Children with HL | | Johnstone et al. [39] |
Auditory distance perception | Blind and sighted adults with NH | | Kolarik et al. [40], Kolarik et al. [41], Shinn-Cunningham [42], Zahorik [43] |
 | Adults with NH/HL | | Courtois et al. [44] |
Auditory attention switching | Adults with NH | | Oberem et al. [45], Oberem et al. [46] |
Auditory simulations of SIN tasks in simulated indoor environments | Adults with NH | Behavioral | MacCutcheon et al. [47], Peng and Wang [48,49], Helms Tillery et al. [50] | Investigations of speech or word (in noise) recognition, listening effort, and the influence of variables such as language skills, working memory, stimulus presentation (i.e., auditory-only or combined with visual stimuli), and room acoustics such as reverberation times in simulated VAEs
 | Children with HAs/NH | | McCreery, Walker, Spratford, Lewis, and Brennan [24] |
 | Children with NH | | Rudner, Lyberg-Ahlander, Brannstrom, Nirme, Pichora-Fuller, and Sahlen [13], MacCutcheon et al. [51], MacCutcheon et al. [52] |
 | Adults with NH/HAs | | Ricketts et al. [53] |
C | Speech Comprehension Neuroimaging Studies | | |
Speech/SIN recognition, the effects of cochlear implantation and of age-related or early-onset HL, and the underlying neural mechanisms identified by invasive (i.e., ECoG) or noninvasive (i.e., EEG, fNIRS, fMRI) neuroimaging | Adults with NH | fNIRS | Defenderfer et al. [54], Wijayasiri et al. [55], Zhang et al. [56] | Investigations of (selective attention to) speech or word recognition in quiet or in noise and the underlying neural mechanisms, by means of spatial and/or temporal neural analyses, using head-/earphone- or free-field loudspeaker-based auditory reproduction while manipulating auditory (and visual) stimulation, or using real-life hyperscanning (i.e., measuring two or more participants at the same time) paradigms
 | Adults with NH/CI | fNIRS | Olds et al. [57], Rowland et al. [58], Anderson et al. [59] |
 | Children with NH | fNIRS | Mushtaq et al. [60] |
 | Children with CI | fNIRS | Mushtaq et al. [61] |
 | Adults with age-related HL | EEG | Puschmann et al. [62], Marsella et al. [63] |
 | Infants | EEG-fNIRS | Telkemeyer et al. [64] |
 | Adults with NH | fNIRS hyperscanning | Dai et al. [65], Jiang et al. [66] |
 | Adults with medically intractable epilepsy | ECoG | Zion Golumbic et al. [67] |
 | Adults with NH | EEG-fMRI | Puschmann et al. [68] |
 | Adults with age-related HL | fMRI | Wong et al. [69] |
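Several of the tests in panel A estimate the SRT adaptively. As a point of reference, a generic 1-up/1-down staircase that converges on the 50%-correct SNR can be sketched as follows; the step size, trial count, and averaging rule are illustrative assumptions, not the published procedures of HINT, OlSa, or LiSN-S.

```python
def run_staircase(respond, start_snr=0.0, step_db=2.0, n_trials=20):
    """Generic 1-up/1-down adaptive track: lower the SNR after a correct
    response, raise it after an incorrect one; the track oscillates around
    the 50%-correct point (the SRT)."""
    snr, history = start_snr, []
    for _ in range(n_trials):
        correct = respond(snr)      # present a sentence at this SNR
        history.append(snr)
        snr += -step_db if correct else step_db
    # Estimate the SRT as the mean SNR over the second half of the track,
    # after the initial descent has converged
    tail = history[n_trials // 2:]
    return sum(tail) / len(tail)

# Idealized listener that is always correct above an SRT of -5 dB SNR
estimated_srt = run_staircase(lambda snr: snr > -5.0)  # → -5.0
```

With a real (probabilistic) listener, the track settles around the SNR at which the response is correct on half of the trials, which is what the adaptive SRT tests above report.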
Condition | Statistics | HA 1 | HA 2 | HA 3
---|---|---|---|---
RTlowSsamePsame | t-value | 1.93 | 1.70 | 0.62
 | two-tailed probability | 0.11 | 0.15 | 0.57
 | Effect size and 95% CI | 2.08 (0.58–3.55) | 1.84 (0.45–3.17) | 0.66 (−0.26–1.53)
 | Estimated percentage of HA case falling above NH group (CI) | 94.42 (71.79–99.98) | 92.51 (67.30–99.92) | 71.72 (39.90–93.74)
RTlowSsamePdiff | t-value | 2.73 | 1.24 | 0.83
 | two-tailed probability | 0.04 * | 0.27 | 0.45
 | Effect size and 95% CI | 2.95 (1.00–4.87) | 1.34 (0.17–2.45) | 0.89 (−0.10–1.83)
 | Estimated percentage of HA case falling above NH group (CI) | 97.93 (84.02–100) | 86.50 (56.92–99.28) | 77.73 (46.05–96.64)
RTlowSdiffPsame | t-value | 2.14 | 1.97 | 1.94
 | two-tailed probability | 0.09 | 0.11 | 0.11
 | Effect size and 95% CI | 2.31 (0.69–3.89) | 2.12 (0.60–3.61) | 2.09 (0.58–3.56)
 | Estimated percentage of HA case falling above NH group (CI) | 95.74 (75.53–100) | 94.68 (72.47–99.98) | 94.48 (71.95–99.98)
RTlowSdiffPdiff | t-value | 3.12 | 2.26 | 0.85
 | two-tailed probability | 0.03 * | 0.07 | 0.43
 | Effect size and 95% CI | 3.37 (1.19–5.54) | 2.44 (0.76–4.10) | 0.92 (−0.08–1.86)
 | Estimated percentage of HA case falling above NH group (CI) | 98.69 (88.35–100) | 96.35 (77.52–100) | 78.30 (46.67–96.87)
RThighSsamePsame | t-value | 4.72 | 5.74 | 1.12
 | two-tailed probability | 0.005 ** | 0.002 ** | 0.31
 | Effect size and 95% CI | 5.10 (1.96–8.26) | 6.20 (2.42–10.00) | 1.21 (0.10–2.26)
 | Estimated percentage of HA case falling above NH group (CI) | 99.74 (97.47–100) | 99.89 (99.23–100) | 84.32 (53.90–98.81)
RThighSsamePdiff | t-value | 5.20 | 4.73 | 1.66
 | two-tailed probability | 0.003 ** | 0.005 ** | 0.16
 | Effect size and 95% CI | 5.61 (2.18–9.07) | 5.11 (1.96–8.28) | 1.79 (0.42–3.11)
 | Estimated percentage of HA case falling above NH group (CI) | 99.83 (98.52–100) | 99.74 (97.50–100) | 92.09 (66.42–99.90)
RThighSdiffPsame | t-value | 2.63 | 4.28 | 2.81
 | two-tailed probability | 0.047 * | 0.008 ** | 0.04 *
 | Effect size and 95% CI | 2.84 (0.94–4.70) | 4.62 (1.75–7.50) | 3.03 (1.04–5.00)
 | Estimated percentage of HA case falling above NH group (CI) | 97.66 (82.73–100) | 99.61 (95.97–100) | 98.11 (84.98–100)
RThighSdiffPdiff | t-value | 1.19 | 1.33 | 0.89
 | two-tailed probability | 0.29 | 0.24 | 0.41
 | Effect size and 95% CI | 1.28 (0.14–2.36) | 1.44 (0.23–2.59) | 0.96 (−0.06–1.92)
 | Estimated percentage of HA case falling above NH group (CI) | 85.58 (55.61–99.10) | 87.98 (59.13–99.52) | 79.30 (47.79–97.26)
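The per-child statistics above (a single HA case compared against the small NH control group, with an estimated percentage of the control population falling below the case) follow the logic of a Crawford–Howell modified t-test for single-case studies. A minimal sketch, with made-up control values rather than the study's data:

```python
import math
from statistics import mean, stdev

def crawford_howell(case, controls):
    """Modified t-test comparing one case to a small control sample:
    t = (case - mean(controls)) / (sd(controls) * sqrt((n + 1) / n)),
    evaluated against a t distribution with n - 1 degrees of freedom."""
    n = len(controls)
    t = (case - mean(controls)) / (stdev(controls) * math.sqrt((n + 1) / n))
    return t, n - 1  # look up the two-tailed p, e.g., via scipy.stats.t.sf

# Illustrative control sample and case score (not the study's data)
t, df = crawford_howell(5.0, [0.0, 1.0, 2.0, 3.0, 4.0])
```

The one-tailed p from this test also serves as a point estimate of the proportion of the control population expected to score more extremely than the case, which is presumably how the "estimated percentage" rows can be read.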
Aspect | Challenge(s) | Explanation | Considerations/Recommendations to Address Challenges
---|---|---|---
Task | Long task duration and long-lasting fNIRS cap wearing | Lengthy and strenuous paradigm for younger children (~30 min; the duration is especially long when speech recognition in background noise is good, due to the staircase procedure) | Administer the task in several testing sessions/days when possible. Focus on fewer variables of interest that might affect listening in background noise.
fNIRS measurement during adapted versions of the current task design | Repetition of task conditions | For fNIRS measurements, repetitions of testing conditions within each subject are ideally warranted; currently, each condition is presented only once | Increase repetitions of test conditions, e.g., over several testing sessions (see recommendations for task design), to diminish the effects of noise in the fNIRS signal and of measurement errors.
 | Disentangling behavioral performance and manual presentation of target sentences | Currently, listening and speaking are both included in the mean concentration changes of HbT; manual presentation times of the target sentence led to differing combinations of target and distractor speech | An event-related design with fewer conditions or a block design with fixed presentation times (i.e., fixed time periods for the occurrence of events) should be considered. For an overview of the advantages and disadvantages of block and event-related fNIRS designs in auditory assessments, see van de Rijt et al. [130].
 | Perceived loudness vs. (physical) sound intensity | The amplitude of the fNIRS signal might be affected by sound intensity. | Loudness deviations when investigating SIN comprehension typically do not exceed 10 dB SPL; activation differences are thus unlikely to reflect overall sound intensity differences. Nevertheless, individual loudness perception (rather than physical intensity) appears to be related to brain activation [130,131,132], and subjective auditory loudness perception should be assessed and taken into consideration during interpretation.
 | Noise removal: head movements and high-pass filtering | Head movements are warranted during VAE simulations; an excessive amount, however, might distort the fNIRS signal. The long duration of the task limits the strict application of high-pass filters. | For datasets acquired from challenging samples, with few trials and lengthy paradigms, and when head movements are an important aspect of the task, combined motion artifact detection and correction techniques are highly recommended (e.g., see Jahani, Setarehdan, Boas, and Yucel [113] or Di Lorenzo, Pirazzoli, Blasi, Bulgarelli, Hakuno, Minagawa, and Brigadoi [114]). Implementation of short-separation CHs, which are sensitive to changes in superficial blood flow, is considered crucial to remove noise (i.e., extra-cerebral signal) [133,134]; this is also highly relevant given the long task duration that limits the application of strict high-pass filters. When investigating various age groups, an age-corrected differential pathlength factor is advised [135].
 | Speech-induced motion artifacts | Chin clasps of the cap might transfer speech-induced motion of the jaw | Use (EEG) caps that ensure a firm hold without a chin clasp. Attaching the ends of the cap to the upper body should be considered. Fixate fiber bundles (if no wireless device is available) to the fNIRS cap to prevent movements from being transmitted from the fiber bundles to the optodes.
 | Localization/ROIs and lateralization | Variability in head size and shape might affect the formation of ROIs, and differential lateralization of speech-related activity might add further variation. | The use of probe positioning units ensures correct and consistent fNIRS probe placement. Individual formation of ROIs by allocating relative weights to the CHs depending on their probability of falling into a respective ROI (e.g., see Huppert et al. [136]) might be considered for the analyses. In addition, variability in speech lateralization due to, inter alia, speech content [137,138,139] should be controlled for.
Participants | Varying degrees of HL, HA devices, and frequency of HA use | Due to time constraints and the explorative purpose of the study design, audiometry was performed only for the HA group and served as input for the research HAs | Future studies assessing larger populations should aim to control for varying degrees of hearing (loss) and administer detailed questionnaires about HA use, device, and fitting.
 | Other factors affecting speech comprehension | Due to the small sample size, the current pilot investigation could not control for variability in hearing abilities. Auditory, linguistic, as well as other cognitive mechanisms have been suggested to affect speech understanding (e.g., see the ease of language understanding model, Rönnberg, Lunner, Zekveld, Sörqvist, Danielsson, Lyxell, Dahlström, Signoret, Stenfelt, and Pichora-Fuller [19], Rönnberg, Holmer, and Rudner [20], Holmer, Heimann, and Rudner [21], or McCreery, Walker, Spratford, Lewis, and Brennan [24]). Speech represents a highly complex auditory signal that involves multiple brain networks. Animal models of cortical reorganization following HL highlight the widespread effects of HL beyond the auditory cortex and the interplay of multiple neural networks that, in turn, make the effects of HL on speech understanding highly individual [98,99]. | Additional measures of cognition and speech performance next to audiometry (e.g., assessment of (verbal) IQ and speech production) were beyond the scope of the current pilot study; however, they are highly recommended for future applications.
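The short-separation recommendation above can be sketched in a few lines: the short channel samples mainly superficial, extra-cerebral hemodynamics, so a least-squares scaled copy of it is subtracted from the long channel. The sketch below uses synthetic signals (the sinusoids and the 0.5 coupling factor are illustrative); in practice this is performed within fNIRS toolboxes such as Homer or MNE-NIRS, typically as a nuisance regressor in a GLM.

```python
import math

def short_channel_regression(long_ch, short_ch):
    """Subtract the least-squares projection of the short (superficial)
    channel from the long channel, leaving the cortical component."""
    beta = sum(l * s for l, s in zip(long_ch, short_ch)) / \
           sum(s * s for s in short_ch)
    return [l - beta * s for l, s in zip(long_ch, short_ch)]

# Synthetic example: cortical response plus superficial contamination
n = 1000
ts = [2.0 * math.pi * k / n for k in range(n)]
superficial = [math.sin(x) for x in ts]      # short-channel signal
cortical = [math.cos(3.0 * x) for x in ts]   # "true" brain signal
long_ch = [c + 0.5 * s for c, s in zip(cortical, superficial)]

cleaned = short_channel_regression(long_ch, superficial)
```

Because the two synthetic components are orthogonal here, `cleaned` recovers the cortical signal almost exactly; real data additionally require the motion correction and filtering steps cited above.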
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Bell, L.; Peng, Z.E.; Pausch, F.; Reindl, V.; Neuschaefer-Rube, C.; Fels, J.; Konrad, K. fNIRS Assessment of Speech Comprehension in Children with Normal Hearing and Children with Hearing Aids in Virtual Acoustic Environments: Pilot Data and Practical Recommendations. Children 2020, 7, 219. https://doi.org/10.3390/children7110219