Article

Predictors of Speech-in-Noise Understanding in a Population of Occupationally Noise-Exposed Individuals

1 Institut de Recherche Biomédicale des Armées, 1 Place Valérie André, 91220 Brétigny sur Orge, France
2 iAudiogram—My Medical Assistant SAS, 51100 Reims, France
3 Department of Neurosensory Biophysics, INSERM U1107 NEURO-DOL, School of Medicine, Université Clermont Auvergne, 63000 Clermont-Ferrand, France
4 Laboratoire des Systèmes Perceptifs, UMR CNRS 8248, Département d’Etudes Cognitives, Ecole Normale Supérieure, Université Paris Sciences et Lettres (PSL), 75005 Paris, France
5 Department of Otorhinolaryngology-Head and Neck Surgery, Rennes University Hospital, 35000 Rennes, France
6 Centre de Recherche en Neurosciences de Lyon, CRNL Inserm U1028—CNRS UMR5292—UCBLyon1, Perception Attention Memory Team, Bâtiment 452 B, 95 Bd Pinel, 69675 Bron Cedex, France
* Author to whom correspondence should be addressed.
Biology 2024, 13(6), 416; https://doi.org/10.3390/biology13060416
Submission received: 12 April 2024 / Revised: 20 May 2024 / Accepted: 21 May 2024 / Published: 5 June 2024
(This article belongs to the Special Issue Neural Correlates of Perception in Noise in the Auditory System)

Simple Summary

In professional environments, communication errors can contribute to accidents. Professionals working in noisy environments may have difficulty understanding speech in noise. This may be due to the masking effect of the noise, but also to auditory lesions caused by regular noise exposure. The current audiologic tools in occupational medicine are insufficient to both assess difficulties in understanding speech in noise and monitor workers’ hearing. The aim of this study was to evaluate the relationships between different variables thought to relate to speech-in-noise understanding, and to identify the most important ones. Hearing thresholds at 12,500 Hz, a frequency higher than those measured with conventional audiometry, were found to be strongly related to the ability to understand speech in noise. Regular monitoring of such extended high-frequency audiometry could therefore make it possible to offer appropriate care before auditory function deteriorates critically.

Abstract

Understanding speech in noise is particularly difficult for individuals occupationally exposed to noise, due to a mix of noise-induced auditory lesions and the energetic masking of speech signals. For years, the monitoring of conventional audiometric thresholds has been the usual method to check and preserve auditory function. Recently, suprathreshold deficits, notably difficulties in understanding speech in noise, have pointed out the need for new monitoring tools. The present study aims to identify the most important variables predicting speech-in-noise understanding, in order to suggest a new method of hearing status monitoring. Physiological (distortion product otoacoustic emissions, electrocochleography) and behavioral (amplitude and frequency modulation detection thresholds, conventional and extended high-frequency audiometric thresholds) variables were collected in a population of individuals presenting a relatively homogeneous occupational noise exposure. Those variables were used as predictors in a statistical model (random forest) to predict the scores of three different speech-in-noise tests and a self-report of speech-in-noise ability. The extended high-frequency threshold appears to be the best predictor and is therefore an interesting candidate for a new way of monitoring noise-exposed professionals.

1. Introduction

Noise is a polymorphous concept with at least three common definitions. Firstly, it may correspond to sound stimulation that is irrelevant to the listener, masking a signal of interest and thus diminishing its intelligibility. Secondly, noise can be considered a physical aggressor inducing damage to the auditory system. Finally, noise is an environmental stressor that can, for example, disrupt sleep and disturb the cardiovascular system. In the present study, we mainly use the first two definitions (masker and aggressor). Understanding speech in a noisy background is difficult, and particularly so for individuals occupationally exposed to noise as a physical aggressor, e.g., professional motorcyclists. The difficulty in understanding speech arises because the background noise contains energy in the same frequency regions as the speech (energetic masking [1]). It also arises because prolonged or excessive noise exposure can alter the structures of the auditory periphery and may ultimately lead to noise-induced hearing loss [2]. However, individuals chronically exposed to noise, even with normal or near-normal audiometric thresholds, can exhibit difficulties understanding speech in noise [3,4].
Recently, several research groups have explored the hypothesis that noise exposure can induce a selective loss of the synapses between the inner hair cells (IHCs) and the low-spontaneous-rate auditory nerve fibers in the cochlea, often with an otherwise normal or near-normal audiogram (see the seminal paper by Kujawa and Liberman [5]; for reviews, see [6,7]). This synaptopathy has also been called hidden hearing loss (a term coined by Schaette and McAlpine [8]), because its effect is not revealed by conventional audiometric measures. It is now widely assumed that clinical measures more sensitive than the conventional audiogram are needed [9]. However, a gold standard for these new tests and best practices is still to be defined [9,10], in order to detect early signs of hearing deficits and to implement better prevention programs.
These new tests should be defined in relation to the difficulties in understanding speech in noise. The evaluation of speech-in-noise performance in humans varies along a large variety of factors (type of target speech, type of masker, type of response, signal-to-noise ratio, and type of paradigm, to name a few), each providing different insights into an individual’s ability to process speech in noisy environments (see the review in [2]). An interesting way to differentiate these tests in the context of this study is by their lexical complexity (phonemes or syllables, words, and sentences). It has indeed been shown that lexical complexity is one of the key factors for understanding the relative influence of the cognitive processes underlying, and correlated with, speech-in-noise tasks (review in [11]). Classically, tests with phonemes are less sensitive to cognitive factors than sentence recognition tests. To explain the apparent discrepancy in the literature regarding the existence of noise-induced cochlear synaptopathy in humans, DiNino et al. [12] showed that the choice of the target speech and of the speech-in-noise task greatly impacts whether a relationship between the speech-in-noise performance and the assumed physiological proxies of synaptopathy (electrocochleography/auditory brainstem response wave I, middle ear muscle reflex) is observed. For instance, tests with a low lexical complexity that maximize the importance of fine temporal details were more likely to be correlated with proxy measures of synaptopathy in humans.
The list of statistically significant predictors of speech-in-noise performance is vast, especially for individuals exposed to noise. A systematic overview being largely beyond the scope of this paper, we chose to focus instead on measures that could ultimately be easily implemented in a prevention program, in addition to the conventional pure tone audiogram. Behaviorally, a decline in auditory temporal abilities (e.g., amplitude and frequency modulation detection thresholds) has been linked to a decline in speech-in-noise performance [13,14,15,16,17]. There is also now ample evidence for an association between extended high-frequency (EHF) audiometry, defined as frequencies above 8 kHz, and speech perception difficulties (review in [18]). Interestingly, noise exposure has been identified as one of the causes of EHF hearing loss [19,20]. To complement behavioral audiometric measures, electrophysiological measures of cochlear function can be performed in individuals with normal hearing thresholds and compared with the speech-in-noise performance (e.g., [21,22,23]). These tests include the measurement of cochlear amplification via distortion product otoacoustic emissions (DPOAEs), and of the synaptic activity between the IHCs and the auditory nerve via electrocochleography (EcochG) measurements. As explained above for speech-in-noise tasks, the different and sometimes opposite results in the literature regarding the existence and the measurement of noise-induced cochlear synaptopathy can also be linked to the very heterogeneous methods used (see the recent reviews [24,25,26]). This discrepancy could also highlight the fact that variability in what we call normal or near-normal thresholds can potentially account for some of the so-called synaptopathy effects. In some of the studies in which noise exposure seems responsible for functional speech-in-noise differences in the absence of hearing loss, there is, of course, the possibility that differences in thresholds within the normal range nonetheless contribute to the differences observed in the speech-in-noise performance [2,7]. In addition, cochlear synaptopathy is very hard to study in humans, and is generally mixed with other dysfunctions of the outer hair cells (OHCs) [22].
Finally, when studying noise-exposed individuals, the definition and the measurement of what is called “exposure” are crucial. As pointed out by Parker [22], one of the differences potentially explaining the discrepancies between noise-induced cochlear synaptopathy studies lies in the way noise exposure is measured. When the noise exposure measurement is based on self-reports, no link is found between proxies of cochlear synaptopathy and speech-in-noise performance [20,27,28]. When controlled and homogeneous groups of noise-exposed individuals are studied (young professional musicians in [29]; firearm users in [30]; train drivers in [31]), correlations were found. Moreover, to investigate the effect of noise exposure, groups of noise-exposed individuals are very often compared with controls. This may contradict the idea that the outcome of noise exposure possibly lies on a continuum from no synaptopathy to synaptopathy with damage [9].
In the current study, we do not question the influence of different predictors on speech-in-noise performance, nor test the influence of one predictor on another (although it would be possible with this set of data). Instead, we investigate how to quantify and classify the various predictors of speech-in-noise performance in terms of importance. This approach has a direct clinical outcome as it allows us to establish which predictors are urgently needed for regular testing [2] in order to enhance existing hearing loss prevention policies.
In fact, the choice of the statistical model and analysis is key to our study design, as it influences both how we think about the design and the conclusions we can draw from the data. To illustrate this point, Yeend et al. [32] recognized that one of the limitations of their study was the use of multiple comparisons, potentially resulting in falsely identified positive effects. More recently, Balan et al. [33] emphasized, with appropriate machine learning techniques, the importance of EHF audiograms in predicting speech-in-noise performance.
In this paper, we use random forests—a machine learning tool for classification and regression. The random forest tool is intuitive, and, more importantly, it has an inherent capacity to produce measures of “variable importance”. Kim et al. [34] highlighted the usefulness of the random forest model, compared to other machine learning techniques, for predicting speech discrimination scores from pure tone audiometry thresholds.
In this study, we investigated the relative importance of several audiometric, auditory, and physiological predictors of speech-in-noise performance. The speech-in-noise performance was assessed with three tests of different degrees of lexical complexity (consonant identification, words-in-noise recognition, and the French sentence matrix test). Our listener group consisted of individuals exposed to occupational noise, namely, professional motorcyclists. This gave us a homogeneous subject group and an easy proxy measure of noise exposure (the number of years of motorcycling). All the participants had normal hearing thresholds (pure tone average (PTA), i.e., the mean of the thresholds at 500, 1000, 2000, and 4000 Hz, below 20 dB HL) according to the reference of the International Bureau for Audiophonology [35]. For all participants, we measured the EHF threshold at 12.5 kHz, DPOAEs to evaluate the OHC function, and EcochG to assess the auditory nerve function. Temporal processing was assessed using amplitude modulation (AM) and frequency modulation (FM) detection thresholds. Finally, we evaluated the subjective auditory consequences of each subject’s noise exposure via the speech-in-noise pragmatic scale of the Speech, Spatial and Qualities of Hearing Scale (SSQ) questionnaire [36,37].

2. Materials and Methods

2.1. Overview

The experiment was conducted within a single week, over three half-days, each dedicated to a specific set of experimental sessions. Two half-days comprised a speech-in-noise audiometry test and a behavioral test (for instance, a session of consonant identification followed by a session of AM detection). The third comprised a speech-in-noise audiometry test, recordings of DPOAEs and EcochG, and the questionnaires (demographic and the SSQ speech-in-noise pragmatic scale). The order of all tests was randomized, with at least one speech-in-noise test during each half-day.

2.2. Participants

Seventy-three participants (72 men; mean ± standard deviation age: 38 ± 7.6 years; Figure 1) took part in the present study. All were professional motorcyclists occupationally exposed to noise (duration of exposure ranging between 1 and 31 years, with a median of 8 years). The noise to which motorcyclists are exposed comes from radio communications, air turbulence, and the motorcycle engine. The noise levels usually vary between 94.6 and 103.6 dB Leq8h. However, the motorcyclists use noise-cancelling earplugs, so the actual levels vary between 69.6 and 81 dB Leq8h. In a typical week of work, the duration of daily noise exposure varies between 2 and 7 h, with a median of approximately 4 h.
Three participants were excluded because their PTA was equal to or above 20 dB HL. The 70 remaining participants had a PTA considered normal in both ears according to the International Bureau for Audiophonology calculation [35]. The maximum age was limited to 55 years to reduce the risk of presbycusis. Informed consent was obtained from all participants involved in this study. This study was approved by the Comité de Protection des Personnes Sud-Ouest et Outre-Mer II (IDRCB 2017-A00859-44).

2.3. Mobile Laboratory

The tests were carried out in a mobile hearing laboratory (Figure 2 and Figure 3), consisting of four audiometric booths. Each booth is equipped with experimental instruments that can be remotely controlled from the control room. The four booths were used simultaneously to optimize the experimental time for a group of participants. An audio and video system enabled communication between the experimenter and each of the four participants individually or simultaneously, and was used to remind them of the instructions, maintain motivation, and monitor their state of arousal.

2.4. Speech-in-Noise Audiometry

Each participant performed three different speech-in-noise audiometry tests: (1) the consonant identification test, (2) the word recognition test, and (3) the French matrix test (FrMatrix). The masking provided by the noise was energetic (no informational masking was used). A closed-set paradigm was used for the consonant identification test, whereas an open-set paradigm was used for the word recognition and FrMatrix tests.

2.5. Consonant Identification Test

The consonant identification test consists of the presentation of 48 nonsense vowel–consonant–vowel–consonant–vowel (VCVCV) utterances, spoken by a French female talker and presented in a spectro-temporally modulated noise at −10 dB SNR. The signal was presented monaurally to the right ear. The 48 presentations came from three recordings of 16 French consonants (C = /p, t, k, b, d, g, f, s, ʃ, v, z, ʒ, l, m, n, r/), systematically associated with the vowel /a/. The duration of each presentation was, on average, 1272 ± 113 ms.
For each trial, the participant had to indicate the perceived consonant by clicking on a matrix of 16 different consonants presented visually in front of them. No feedback was provided. The identification score corresponded to the percentage of correct answers. The presentation level was 65 dB SPL.

2.6. Words-in-Noise Recognition

Ten different lists were presented to the right ear of each participant. Each list consisted of 25 monosyllabic French words. Four SNRs (−5, 0, 5, and 10 dB, in a speech-shaped noise) plus a condition in silence were compared. For each condition, two lists (i.e., 50 words) were presented. The order of presentation was randomized, and the association between lists and SNR conditions was counterbalanced. Each participant had to type the word they heard on a keyboard. Participants were instructed to write the words as they heard them, even if only one phoneme was heard, and to respect phoneme-to-grapheme conversion in the French language, regardless of spelling mistakes. Each correspondence between the written word and the target word from the list was then manually checked by two independent observers.

2.7. French Sentence Matrix Test

The French version of the sentence matrix test [38] was used to determine the speech reception threshold of the participants. The sentences are all constructed using the same pattern: a first name, a verb, a number, the name of an object, and a color (for instance, “Jean Luc ramène trois vélos roses”). There are 10 possible choices for each word category. A test session consists of 20 sentences presented with an adaptive staircase procedure. The listener is seated one meter away, facing the loudspeaker emitting the sentences and the noise, and their task is to repeat the words aloud. The signal-to-noise ratio (with the noise level fixed at 65 dB SPL) varies from sentence to sentence depending on the number of correct words given by the participant, in order to estimate the 50% speech reception threshold (SRT). Each participant performed three sessions. The final SRT of each participant (i.e., the dependent variable) is the best (i.e., lowest) value from the three sessions. The normative value is −6 dB SNR (standard deviation: 0.6 dB [39]).

2.8. Speech, Spatial and Quality of Hearing Questionnaire

In addition to the behavioral measures of speech intelligibility in noise described above, participants also completed a self-report measure.
The Speech, Spatial and Qualities of Hearing Scale (SSQ) questionnaire enables measurement of a participant’s ability in various listening situations [36] using a numerical gradation from 0 (no, not at all) to 10 (yes, perfectly). The questionnaire is divided into three subscales: speech comprehension (14 questions), spatial hearing (17 questions), and hearing quality (18 questions).
The closer the numerical value is to 10, the more the subject feels able to perform the task described. We used a French version of the questionnaire that was previously validated [37,40]. Items 1, 4, and 6 of the speech comprehension subscale were averaged to form the “speech-in-noise” pragmatic scale [41].
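As an illustration, this scoring can be sketched in R as follows (a minimal example; the column names are hypothetical):

# Sketch: the speech-in-noise pragmatic scale as the mean of items 1, 4, and 6
# of the SSQ speech subscale (each rated 0-10); column names are hypothetical.
ssq <- data.frame(speech_q1 = c(7, 5), speech_q4 = c(8, 4), speech_q6 = c(6, 5))
ssq$sin_pragmatic <- rowMeans(ssq[, c("speech_q1", "speech_q4", "speech_q6")])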

2.9. Predictors of Speech-in-Noise Tests

In order to identify the best predictors of speech intelligibility in noise, several physiological and behavioral measurements were conducted. Altogether, taking into account several markers (i.e., variables) for each measure type, a set of 48 variables was obtained. All of them are described below, and all were expected to predict, to some degree, the speech-in-noise performance as measured by the four speech-in-noise tests described above (consonant identification, word recognition, FrMatrix, and the speech-in-noise pragmatic scale of the SSQ).

2.9.1. Pure Tone Audiometry

The audiometric thresholds were recorded with an automatic procedure using the EDM-Echodia Elios® system (Le Mazet-Saint-Voy, France) with RadioEar DD45 headphones, in the left and right ears, at the frequencies 125, 250, 500, 1000, 2000, 4000, 8000, and 12,500 Hz (Figure 4). The threshold at 12,500 Hz was defined as the EHF threshold. The four-frequency pure tone average (PTA; 500, 1000, 2000, and 4000 Hz) was computed for each ear, and the best-ear PTA was defined as the lower of the two. Therefore, nineteen predictor values were obtained from the pure tone audiometry.
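As an illustration, the PTA computation can be sketched in R as follows (a minimal example with toy data; the column names are hypothetical):

# Sketch: four-frequency PTA per ear and best-ear PTA from thresholds in dB HL.
# 'f500_L' etc. are hypothetical column names for the left (L) and right (R) ears.
audio <- data.frame(
  f500_L = c(5, 10), f1000_L = c(5, 20), f2000_L = c(10, 15), f4000_L = c(10, 25),
  f500_R = c(0, 10), f1000_R = c(5, 15), f2000_R = c(5, 20),  f4000_R = c(10, 20)
)
pta_freqs <- c("f500", "f1000", "f2000", "f4000")
audio$pta_left  <- rowMeans(audio[, paste0(pta_freqs, "_L")])
audio$pta_right <- rowMeans(audio[, paste0(pta_freqs, "_R")])
audio$pta_best  <- pmin(audio$pta_left, audio$pta_right)  # lower PTA = better ear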

2.9.2. Amplitude and Frequency Modulation Detection Thresholds

A total of eight thresholds were obtained for each participant using a two-interval forced-choice procedure, from the combination of modulation type (AM or FM), sinusoidal carrier frequency (500 or 4000 Hz), and stimulus intensity (10 or 60 dB SL). The standard signal was unmodulated, i.e., the modulation depth (Δ) was set to 0. The target signal was modulated, and the value of the modulation depth, Δ, was adaptively modified in order to determine the threshold. All stimuli were generated digitally at a sampling rate of 44.1 kHz and presented at a level of 10 or 60 dB SL, using Beyer DT 770 headphones and an external AudioEngine D3 sound card. Stimuli were presented monaurally to the right ear. Each trial consisted of a target modulated signal and a standard unmodulated signal, presented in random order and separated by a 600 ms silent interval. The participant was instructed to indicate the stimulus containing the modulation, and was informed of the accuracy of their response by a light signal (green if correct, red otherwise). Each stimulus lasted 1200 ms.
The threshold was determined using a “2-down-1-up” method: Δ decreased when the participant responded correctly twice consecutively, and increased in the event of an error. The test stopped after 14 inversions, an inversion being defined as an increase followed by a decrease in Δ or vice versa. The detection threshold was calculated from the average Δ over the last six inversions. Three threshold estimates were made for each combination of intensity level (10 and 60 dB SL), modulation type (AM and FM), and carrier frequency (500 and 4000 Hz). The final value for each condition corresponded to the best performance obtained.
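For illustration, this staircase rule can be sketched in R as follows (a minimal simulation; the listener’s psychometric function is a hypothetical stand-in, and the 2-down-1-up rule converges on the 70.7% correct point):

# Sketch of the "2-down-1-up" staircase: the modulation depth (delta) decreases
# after two consecutive correct responses, increases after any error, stops at
# 14 reversals, and the threshold is the mean delta over the last 6 reversals.
run_staircase <- function(start = 0.5, step = 1.25,
                          p_correct = function(d) 0.5 + 0.5 * pnorm(log(d / 0.05) / 0.5)) {
  delta <- start; n_correct <- 0; last_dir <- 0; reversals <- c()
  while (length(reversals) < 14) {
    correct <- runif(1) < p_correct(delta)              # simulated trial outcome
    if (correct) {
      n_correct <- n_correct + 1
      dir <- if (n_correct == 2) { n_correct <- 0; -1 } else 0
    } else {
      n_correct <- 0; dir <- 1
    }
    if (dir != 0) {                                     # a step is taken
      if (last_dir != 0 && dir != last_dir) reversals <- c(reversals, delta)
      delta <- if (dir < 0) delta / step else delta * step
      last_dir <- dir
    }
  }
  mean(tail(reversals, 6))                              # threshold estimate
}
set.seed(1); run_staircase()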
In our study, it appeared that some participants were not able to perform the task in three conditions of the FM detection task: the 4000 Hz carrier frequency at 10 dB SL; the 4000 Hz carrier frequency at 60 dB SL; and the 500 Hz carrier frequency at 10 dB SL. To take this into account, we created three two-level categorical variables according to the ability of the participant to perform the task (able/not able). Therefore, 11 predictors were obtained from the AM and FM detection thresholds.

2.9.3. Distortion Products of Otoacoustic Emissions

Distortion product otoacoustic emissions (DPOAEs) were collected with the EDM-Echodia Elios® system (Le Mazet-Saint-Voy, France). An f2/f1 ratio of 1.20 was used, with primary-tone levels of 75 dB SPL (f1) and 65 dB SPL (f2). The amplitudes of the DPOAEs were recorded at frequencies of 1, 2, 3, 4, and 5 kHz in both ears (recorded values of −10 dB SPL or lower were discarded) to obtain 10 predictors per participant.
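For reference, the primary-tone and distortion product frequencies implied by these parameters can be derived as in the following R sketch (assuming that the nominal test frequencies refer to f2, which the text does not specify):

# Sketch: primary tones for an f2/f1 ratio of 1.20, and the cubic distortion
# product frequency 2*f1 - f2 at which the DPOAE is expected to appear.
f2  <- c(1, 2, 3, 4, 5) * 1000   # Hz (assumption: nominal frequencies are f2)
f1  <- f2 / 1.20
fdp <- 2 * f1 - f2               # = f2 * (2/1.20 - 1), i.e., about 0.667 * f2
round(data.frame(f1, f2, fdp))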

2.9.4. Electrocochleography

Extratympanic electrocochleography (EcochG) was conducted with the Echodia Elios® system (Le Mazet-Saint-Voy, France). Two electroencephalogram electrodes were placed on the forehead of the participant (one centered and one off-center, both on the hairline). The extratympanic electrode was a gold-coated soft tiptrode positioned in the outer ear canal. The electrical impedances of the electrodes were checked to be below 5 kΩ. Acoustic stimuli were short clicks delivered at a rate of 11/s. The recordings were collected at 90 dB nHL, then at 80 dB nHL. For each level, the procedure consisted of averaging 500 responses, repeated two or three times depending on the consistency of the waveforms across the 500 responses. For each waveform, the amplitude of wave I was assessed as the difference in voltage between the first peak occurring between 1 and 2.5 ms and the next trough. Then, the amplitudes of the two most consistent waveforms were averaged. Furthermore, the slope of the input/output function linking the two stimulation levels (80 and 90 dB nHL) to the wave I amplitude was computed for each ear. Accordingly, six predictors per participant were obtained (wave I amplitude at 80 and 90 dB nHL in both ears, plus the wave I slope in both ears).
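A minimal R sketch of these two computations (the waveform inputs and names are hypothetical, and the trough search is simplified to the minimum following the peak):

# Sketch: wave I amplitude as the voltage difference between the first peak in
# the 1-2.5 ms window and the following trough, plus the input/output slope
# linking the 80 and 90 dB nHL responses.
wave_I_amplitude <- function(t_ms, uV) {
  win      <- which(t_ms >= 1 & t_ms <= 2.5)
  i_peak   <- win[which.max(uV[win])]                        # peak in the window
  i_trough <- i_peak + which.min(uV[(i_peak + 1):length(uV)])  # simplified trough
  uV[i_peak] - uV[i_trough]
}
io_slope <- function(amp80, amp90) (amp90 - amp80) / (90 - 80)  # microvolts per dB

t <- seq(0, 8, by = 0.05)                                       # ms, toy waveform
v <- 0.4 * exp(-((t - 1.6) / 0.3)^2) - 0.2 * exp(-((t - 2.4) / 0.3)^2)
wave_I_amplitude(t, v)                                          # ~0.6 (arbitrary units)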

2.9.5. Random Forest Analysis

In addition to the 48 predictor variables described above, three were added: age; the number of years of motorcycling; and the history of hearing pathology (otitis media or acute acoustic trauma). Of the resulting 51 variables, 47 were continuous and 4 were categorical. The main goal here was to identify the most important predictors of speech-in-noise performance.
In order to perform this importance analysis, we used random forest algorithms. Recently, biomedical research in general has taken an interest in this machine learning paradigm, given its interpretability, its nonparametric nature with a broad range of use cases, and its ability to handle a mix of continuous and categorical variables [42]. Random forests have already been identified as an interesting choice among machine learning algorithms in the hearing sciences [33,34,43].
Here, a random forest is a combination of 500 decision trees. Each decision tree is built from a random sample of the population and a random sample of the variables, to reduce the risk of overfitting. All 500 trees are then combined to build a model and make a prediction. A prediction error, quantified as the mean squared error (MSE) between the observed and predicted values, is computed from the data excluded from the random samples (the “out-of-bag” error). To assess the importance of a variable, the impact of random permutations of that variable on the MSE is measured: the more the MSE increases, the more important the variable is.
In order to have a reliable measure of each variable’s importance, the non-scaled importance measure was computed on 10 subsamples and then averaged across samples. The subsamples were built by randomly selecting 75% of the original data sample.
We used the randomForest R package (version 4.7-1.1) with the default hyperparameters. Missing values were handled by setting the argument na.action = na.roughfix.
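A minimal R sketch of this pipeline, with toy data standing in for the actual dataset (the data frame and variable names are hypothetical):

# Sketch of the importance analysis: fit a 500-tree random forest on ten 75%
# subsamples, collect the unscaled permutation importance (increase in MSE),
# and average it across subsamples.
library(randomForest)

set.seed(42)  # toy stand-in data so the sketch runs end-to-end
dat <- data.frame(srt = rnorm(70),
                  matrix(rnorm(70 * 10), nrow = 70,
                         dimnames = list(NULL, paste0("pred", 1:10))))

importance_runs <- replicate(10, {
  sub <- dat[sample(nrow(dat), floor(0.75 * nrow(dat))), ]   # 75% random subsample
  rf  <- randomForest(srt ~ ., data = sub, ntree = 500,
                      importance = TRUE, na.action = na.roughfix)
  importance(rf, type = 1, scale = FALSE)[, 1]               # unscaled permutation importance
})
sort(rowMeans(importance_runs), decreasing = TRUE)           # averaged across subsamples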
In addition to the importance graph for each speech-in-noise test, we describe the nine most important variables and the correlations between the speech-in-noise performance and each variable (predictor). Hence, nine scatterplots are plotted for each speech-in-noise audiometry test. On each scatterplot, the Spearman correlation coefficient, its p-value, and the sample size are indicated. The Spearman coefficient was chosen over the Pearson coefficient because several variables were not normally distributed (e.g., speech-in-noise pragmatic scale, years of motorcycling), and for consistency with the nonparametric algorithm of the random forest.
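The per-panel statistics can be sketched in R as follows (a minimal example with toy data; observations are restricted to pairwise-complete cases):

# Sketch: Spearman correlation, its p-value, and the pairwise-complete sample
# size, as annotated on each scatterplot.
spearman_panel <- function(x, y) {
  ok <- complete.cases(x, y)
  ct <- cor.test(x[ok], y[ok], method = "spearman", exact = FALSE)
  c(rho = unname(ct$estimate), p = ct$p.value, n = sum(ok))
}

set.seed(7)                                    # toy data with missing values
x <- rnorm(70); y <- 0.4 * x + rnorm(70); y[c(3, 10)] <- NA
spearman_panel(x, y)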

2.9.6. Missing Values

The global sample size was 70 participants. However, due to various obstacles encountered during the experiment (mainly professional availability; hardware malfunctions; inability to record some of the measures for some participants; see explanations above), the sample size for each variable was less than 70 (see Table 1 for details).

3. Results

For the three speech-in-noise audiometry tests, large inter-individual differences were observed, as evidenced by the large interquartile ranges (Figure 5). The results for all the other tests (i.e., the predictors) are represented in Appendix A.

3.1. Consonant Identification

3.1.1. Predictor Importance

The EHF hearing threshold was the most important variable by far (an importance value of 7.5 as compared to 3.9 for the AMDT_60 dB_500 Hz; see Figure 6). For the consonant identification scores, the conventional audiometric values turned out to be important predictors too: the 8000 Hz audiometric thresholds; the PTA for the left ear; and the best ear PTA (Figure 6).

3.1.2. Correlations between Predictors and Consonant Identification Scores

Significant correlations were also observed for the suprathreshold measures of temporal coding, namely, the AM detection threshold and the FM detection threshold, both measured at 60 dB SL with a 500 Hz carrier frequency (Figure 7).

3.2. Words-in-Noise Recognition

3.2.1. Predictor Importance

For this speech-in-noise audiometry test, the EHF thresholds in both ears were also among the most important variables, and their correlations with the words-in-noise recognition thresholds were among the highest (Figure 8). The FM detection threshold at 60 dB SL with a 500 Hz carrier frequency appeared to be the most important variable, but its correlation with the words-in-noise recognition results was nonsignificant (Figure 9).

3.2.2. Correlations between Predictors and Words-in-Noise Recognition Scores

This discrepancy could be due to interactions between predictors that were not assessed here. Similarly, the DPOAEs appeared important, but their correlation coefficients with the words-in-noise recognition results were not significant. The years of motorcycling practice were correlated with the words-in-noise recognition results. The history of hearing pathology was also highlighted as an important variable: indeed, the participants with a history of otitis media had poorer words-in-noise recognition results.

3.3. French Matrix Test

3.3.1. Predictor Importance

Again, the EHF thresholds were the most important predictors of the FrMatrix score (Figure 10). Two conventional audiometric thresholds were also among the important variables (4000 Hz and 125 Hz, measured in the right ear), but only the correlation with the 4000 Hz audiometric threshold was significant (Figure 11). The AM detection threshold with a 500 Hz carrier frequency appeared important, but the correlation was significant only at 60 dB SL (as with the consonant identification score) and not at 10 dB SL.

3.3.2. Correlations between Predictors and French Matrix Test Scores

As with the words-in-noise recognition results, the FrMatrix was related to the years of motorcycling practice. The DPOAEs appeared among the important variables; nevertheless, their correlation coefficients did not reach significance. The history of hearing pathology was also an important variable, and, as for the words-in-noise recognition, the participants with a history of otitis media showed poorer scores.

3.4. Speech-in-Noise Pragmatic Scale from the Speech, Spatial and Quality of Hearing Questionnaire

3.4.1. Predictor Importance

The years of motorcycling practice was the most important variable as assessed by the model (Figure 12), and confirmed by the significant correlation with the speech-in-noise pragmatic scale (Figure 13).

3.4.2. Correlations between Predictors and Speech-in-Noise Pragmatic Scale

The EHF threshold was found to be significantly correlated with the speech-in-noise pragmatic scale. Several conventional audiometric thresholds were also present among the important variables: the right ear PTA, the right ear 8000 Hz threshold, and the left ear 1000 Hz threshold. However, the correlation with the left ear 1000 Hz threshold was not significant. Age also appeared to be important. The DPOAE measured at 3000 Hz in the left ear appeared to be one of the important variables, but the correlation was nonsignificant.

3.5. Relationship across the Speech-in-Noise Tests

3.5.1. Speech-in-Noise Tests

The consonant identification score correlated with the words-in-noise recognition score, which in turn correlated with the FrMatrix (Figure 14). However, the consonant identification score was not correlated with the FrMatrix.

3.5.2. Speech-in-Noise Pragmatic Scale and Speech-in-Noise tests

The self-reported speech-in-noise pragmatic scale was correlated with the consonant identification score and the words-in-noise recognition threshold but not with the FrMatrix score (Figure 15).

4. Discussion

The aim of this study was to identify the most relevant variables to monitor in a population of noise-exposed professionals. Different physiological and behavioral variables were assessed as predictors of three speech-in-noise tests and the self-reported speech-in-noise abilities. Despite the weak correlations between the speech-in-noise tests, the EHF threshold appeared to be the variable most often related to the speech-in-noise scores. Among the behavioral variables, those related to temporal coding and conventional audiometric thresholds were also highlighted. Concerning the physiological variables, the DPOAE measured at 1000 and 3000 Hz appeared to be the most important variables, although the correlations were weaker than those between the behavioral variables mentioned above.

4.1. Comparisons between Speech-in-Noise Tests

To the best of our knowledge, no previous study has examined the relationships among speech-in-noise audiometry tests in normal- or near-normal-hearing listeners.
In the current study, weak correlations were observed between the three speech-in-noise audiometry tests. These weak correlations could illustrate the fact that the tests differed in their demands on auditory vs. non-auditory abilities (e.g., linguistic abilities, working memory [12]). Most probably, the consonant identification test has the lowest cognitive demand in comparison to the other tests, because of its use of non-word stimuli, which do not require linguistic skills, and of a closed-set paradigm, which does not demand working memory skills as much [12]. The FrMatrix test probably has the highest cognitive demand, with sentences of five words and an open-set paradigm, although the number of alternatives per word category is limited to ten. The words-in-noise recognition test is assumed to have an intermediate cognitive demand, as it was designed to reduce cognitive load by choosing frequent French words with phonological neighbors, while being presented in an open-set paradigm [44]. This hierarchy in cognitive demand across the three tests could explain the observed pattern of correlations: both consonant identification and the FrMatrix were correlated only with words-in-noise recognition, the test of intermediate complexity. This result emphasizes, within the same group of participants, the relative influence of sensory and cognitive factors on speech-in-noise task performance [12], and is coherent with the results reviewed by Dryden et al. [11]. However, intra-individual variability due to learning effects [9] could have blurred the correlations across the tests. This learning effect bias was controlled, firstly, by conducting three FrMatrix sessions, and, secondly, by providing a closed-set response choice in the consonant identification test. Finally, although a learning effect cannot be excluded in the words-in-noise recognition test, that test was designed to reduce top-down influences [45].
Moreover, the three speech-in-noise audiometry tests assessed in the current study also presented weak correlations with the speech-in-noise pragmatic scale of the SSQ, a self-reported measure. This could emphasize the importance of using tests other than questionnaires in a prevention program. However, the lack of a relationship could also reflect the low ecological validity of the tests: none of the speech-in-noise tests used in the current study relied on semantic context or everyday sentences. In a large-sample study (N = 195 near-normal-hearing listeners), Stenbäck and colleagues [46] found a correlation between the speech subscale of the SSQ and a speech-in-noise test using everyday sentences (HINT [47]), but not with a less ecologically valid test [48]. This finding will still need to be replicated and confirmed before being implemented in clinical prevention programs.

4.2. Speech-in-Noise Test Predictors

4.2.1. Audiometric Thresholds

The importance of the EHF thresholds as a predictor for the three speech-in-noise audiometry tests is one of the most important results here.
Correlations between the EHF threshold and speech-in-noise tests have been highlighted in many previous studies involving normal-hearing listeners [29], including populations similar to ours (middle-aged, with near-normal hearing thresholds [32,49]). At least two explanations have been suggested [18,49]. The first is a direct causal relationship: the speech signal in the frequency region above 8 kHz could be useful for identifying words presented in a noisy background [50], especially consonants such as voiceless fricatives [51]. We retrospectively explored this hypothesis by analyzing the spectrum of the speech signals from the three tests used here. Almost all of the acoustic energy was below 8 kHz (see the results in Appendix B), suggesting that this explanation does not apply to our study.
A second possible explanation is that chronic exposure to high-level noise could cause both cochlear synaptopathy, which alters speech-in-noise performance, and damage to the basal outer hair cells, which causes an EHF threshold shift [29]. Indeed, the most basal outer hair cells cannot withstand the oxidative stress produced by overstimulation at high noise levels, owing to their inability to maintain calcium homeostasis [52]. Similarly, high noise levels induce a massive, toxic release of glutamate at the synapses of low-spontaneous-rate auditory fibers, leading to the destruction of these synapses [53].
A third explanation could be that the tuning curves of auditory fibers coding for EHFs broaden when the acoustic stimulation reaches high levels; these fibers could thus provide temporal information and improve the coding of lower frequencies as well [50]. Nevertheless, in our study, this explanation is unlikely given the relatively moderate levels used for stimulus presentation (around 65 dB SPL).
In addition, the EHF threshold has been found to be correlated with several cochlear aggressors beyond those suspected of inducing synaptopathy (age and noise), such as drugs (cisplatin), diabetes [54], and smoking [55]. Therefore, the EHF threshold could act as a kind of cochlear aggressor integrator, with each aggressor affecting speech understanding in various ways.
The high-frequency conventional audiometric thresholds and/or their combination represent several important variables for the consonant identification and FrMatrix tests. These results show that the audiometric thresholds remain informative for normal- or near-normal-hearing listeners. Most of the studies involving normal-hearing listeners explained individual differences in the speech-in-noise performance by noise-induced synaptopathy, even if the mere existence of noise-induced cochlear synaptopathy in living humans is still questionable [25]. Our results suggest that individual differences in conventional audiometric thresholds (EHF excluded), even if they remain in the “normal” range, are also of interest for explaining individual differences in the speech-in-noise performance of a noise-exposed population. Not dividing our participants into several groups, or comparing them with a non-exposed control group, was one of the key points of our study design. Indeed, it is interesting to note that conducting a study based on group comparisons, when the individuals within the group are very homogeneous in terms of conventional audiometric thresholds, might not be a relevant strategy for evaluating the predictors of speech-in-noise performance. Nevertheless, this strategy is often used to compare groups according to their noise exposure in the search for noise-induced synaptopathy [20,27,29].
Finally, we think that focusing on a group of individuals exposed to one type of noise (motorcycle noise) in a very similar way (with one main exposure variable, the number of years of motorcycling) was instrumental in limiting the large variability inherent to this type of study. The motorcycle noise could have caused both synaptopathy and audiometric threshold shifts through outer hair cell damage, gradually and similarly across the population. In other studies exploring populations with normal hearing but different kinds of noise exposure, the correlations could have been blurred: some types of noise could have a greater impact on the outer hair cells (impulse noise), while others have a greater impact on the synapses (steady-state noise) [26].

4.2.2. Amplitude and Frequency Modulation Detection

High permutation importance and significant correlations were observed between the consonant identification score and the AM and FM detection thresholds for the low-frequency carrier at the high intensity level. Measures of temporal coding also appeared important for the words-in-noise recognition and the FrMatrix.
These results are consistent with an alteration in temporal coding abilities induced by synaptopathy, as has been suggested in humans [56] and in rodents [57]. Synaptopathy reduces the number of functional auditory fibers, and hence the fidelity of neural phase locking: because of the fibers’ stochastic activity, the responses of several fibers must be combined to obtain usable information. Temporal coding is thought to play a role in speech comprehension in noisy environments [17,58] by helping the segregation of the signal from the noise. Moreover, as temporal coding is crucial for the segregation of speech streams, a higher importance of variables related to temporal coding would have been expected for speech-in-speech tasks [17] rather than the speech-in-noise tasks used in the current study. Furthermore, in our middle-aged population, aging per se [59] or age-related synaptopathy [60] could also have altered the temporal coding abilities of the participants.

4.2.3. Age

Age did not appear to be one of the important variables for any of the speech-in-noise audiometry tests. This was an unexpected result given that, even within the age span of our population, aging can alter the speech-in-noise performance in many ways (alteration of the outer hair cells and of the synapses between inner hair cells and auditory fibers [60], decrease in temporal coding abilities) [59,61]. The fact that age did not appear as a factor per se could mean that its effect was well captured by the other measurements, as well as by the variable “years of motorcycling”. However, age is also related to a factor that we did not explore in our study: working memory. Working memory is expected to play a role in speech-in-noise intelligibility according to recent models (cognitive hearing science) [62,63], and it can be altered even within our population’s age span [64]. Nevertheless, working memory was perhaps not relevant in our study for explaining the individual differences in the speech-in-noise performance. First, a meta-analysis showed that working memory was not a relevant factor for explaining individual differences among normal-hearing listeners [65]. Second, although we did not explicitly measure it, the level of education, which is related to working memory, was relatively homogeneous in our population. Moreover, our study used only speech-in-noise tasks; a speech-in-speech task would have shown a stronger relationship with age [66].

4.2.4. Physiological Measurements: Distortion Products of Otoacoustic Emissions and Electrocochleography

The amplitude of wave I was not related to the speech-in-noise performance in our study, even though we hypothesized that the participants’ noise exposure should have caused synaptopathy; by definition, synaptopathy implies a decrease in the number of synapses and, hence, a lower wave I amplitude. However, many studies did not observe a correlation between the wave I amplitude and speech-in-noise performance [25,27], in contrast to the findings of [29,67]. Methodological aspects have been proposed to explain this discrepancy [25]. The main difficulty is to target the low-spontaneous-rate fibers [53] in the recorded electrical signal. Different strategies have been used (wave I growth function [68], summating potential/action potential ratio [29]). A promising technique would be to isolate the low-spontaneous-rate fibers using an ipsilateral noise that saturates the high-spontaneous-rate fibers [69]. In our study, the three tests relied on energetic masking. Nevertheless, informational masking with different intelligible speech streams would perhaps have been more efficient at revealing relationships with the EcochG [12]. Indeed, the segregation of speech streams requires sufficient temporal coding abilities to identify the target speech based on voice or localization cues. As the temporal coding of speech signals is supposed to be strongly linked to the number of functional low-spontaneous-rate fibers, synaptopathy would be particularly efficient at reducing speech-in-speech intelligibility.
Previous studies have suggested that the outer hair cell function could also contribute to speech-in-noise performance [22]. Our results support this by highlighting the DPOAE as an important predictor for the words-in-noise recognition and the FrMatrix. Interestingly, Parker [22] found correlations between speech-in-noise tests and the DPOAE in the same range of frequencies and with a similar magnitude of correlation coefficients, but for a lower stimulation intensity (65/55 vs. 75/65 dB SPL). Our DPOAE stimulation levels were not the classical ones [70]; nevertheless, they probably helped improve the signal-to-noise ratio.

4.3. Clinical Implications

The importance of the EHF threshold in predicting speech-in-noise performance suggests that it could be a relevant variable for screening a noise-exposed population. Most commercial audiometers can now measure EHF thresholds, at least up to 12,500 Hz, with the same equipment as the conventional frequencies. If an EHF threshold shift is detected, further tests could be proposed to characterize the auditory dysfunction in more detail. These could include speech-in-noise audiometry, physiological measurements (DPOAE, EcochG), and/or questionnaires (SSQ, tinnitus screening questionnaire). Finally, the early diagnosis of an auditory dysfunction would allow appropriate and timely medical care (hearing aids, strategies for challenging listening situations, strategies to protect residual hearing) and prevent accompanying disabilities. Therefore, we suggest that EHF testing should become a new standard [18,71].

4.4. Limits of the Study

The interpretation of the results was complicated by the fact that some variables were identified as important although not significantly correlated with the speech-in-noise performance. This result could indicate complex interactions between the variables, which could be explored in further studies employing a larger sample of participants. Another limitation of the study is the lack of gender diversity, which is, unfortunately, inherent to the studied population and a side effect of choosing a homogeneous exposed group. Another side effect is the potential lack of generalizability of the results to the entire population. Future work could be conducted with groups of subjects exposed to different kinds of occupational noise.

5. Conclusions

A large number of behavioral and physiological measures are related to speech intelligibility in noisy backgrounds. The EHF threshold appears to be one of the most important factors for predicting the results of different speech-in-noise tests within a noise-exposed population. The EHF threshold could reveal the impact not only of noise exposure but also of numerous other auditory aggressors. Therefore, adding EHF thresholds to the monitoring of a noise-exposed population could help prevent difficulties in understanding speech in noise and the resulting alteration of workers’ professional and personal lives.

Author Contributions

Conceptualization, G.A., C.S. and N.P.; methodology, G.A., C.S., N.P., A.M., F.G., N.W. and V.I.; software, N.W. and V.I.; validation, G.A.; formal analysis, G.A. and V.I.; investigation, G.A., C.S. and F.G.; resources, G.A., A.M., F.G. and N.W.; data curation, G.A., C.S. and A.M.; writing—original draft preparation, G.A. and C.S.; writing—review and editing, G.A., C.S., N.P., A.M., F.G. and V.I.; visualization, G.A. and V.I.; supervision, G.A.; project administration, G.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki, and was approved by the ethics committee Comité de Protection des Personnes Sud-Ouest et Outre-Mer II (IDRCB 2017-A00859-44).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors warmly thank Christian Lorenzi, Véronique Zimpfer, Geoffroy Blanck, Thibaut Fux, and Elodie Vannson for their helpful support, and Jean Christophe Bouy for the software development.

Conflicts of Interest

Authors Nihaad Paraouty and Nicolas Wallaert were employed by the company iAudiogram—My Medical Assistant SAS. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Appendix A

Appendix A.1. Amplitude Modulation and Frequency Modulation Detection Threshold

Figure A1. Amplitude modulation detection threshold as a function of sensation level and carrier frequency. The boxplots show the median (horizontal bar) and the interquartile range (box). The whiskers reach from the lowest to the highest observed value within 1.5 times the interquartile range. Dots show individual points.
Figure A2. Frequency modulation detection threshold as a function of sensation level and carrier frequency. The boxplots show the median (horizontal bar) and the interquartile range (box). The whiskers reach from the lowest to the highest observed value within 1.5 times the interquartile range. Dots show individual points.

Appendix A.2. Electrocochleography

Figure A3. Wave I amplitude as a function of click level and ear. The boxplots show the median (horizontal bar) and the interquartile range (box). The whiskers reach from the lowest to the highest observed value within 1.5 times the interquartile range. Dots show individual points.
Figure A4. Electrocochleography wave I slope as a function of the ear. The boxplots show the median (horizontal bar) and the interquartile range (box). The whiskers reach from the lowest to the highest observed value within 1.5 times the interquartile range. Dots show individual points.

Appendix A.3. Distortion Products of Otoacoustic Emissions

Figure A5. DPOAE as a function of frequency and ear. The boxplots show the median (horizontal bar) and the interquartile range (box). The whiskers reach from the lowest to the highest observed value within 1.5 times the interquartile range. Dots show individual points.

Appendix B

Appendix B.1. Acoustical Analyses of the Three Speech Corpora

For these analyses, the ‘consonant’ and ‘words’ corpora contain all 48 and 291 sound files, respectively, while the ‘FrMatrix’ corpus contains the recordings of 100 random sentences.

Appendix B.2. Upper Frequency Bound Comprising 99% of the Total Power of the Spectrum

Figure A6. Upper frequency bounds below which 99% of the total power of the spectrum of the speech signals is comprised, for the three speech corpora. The boxplots show the median (horizontal bar) and the interquartile range (box). The whiskers reach from the lowest to the highest observed value within 1.5 times the interquartile range. Dots show individual speech sounds. The dashed line marks the frequency of 8000 Hz.

Appendix B.3. Ratio of Acoustical Power in High vs. Low Frequencies

Figure A7. Ratio of acoustical power contained in high vs. low frequencies, with a cutoff frequency of 8 kHz, for the three speech corpora. The boxplots show the median (horizontal bar) and the interquartile range (box). The whiskers reach from the lowest to the highest observed value within 1.5 times the interquartile range. Dots show individual speech sounds. The majority of speech sounds present at least 20 dB more energy in low (below 8 kHz) than in high frequencies.
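For reference, both appendix analyses can be sketched in R for a single mono recording as follows (assuming the tuneR package for WAV input; the file name is hypothetical):

# Sketch: the upper frequency bound containing 99% of the total spectral power
# (Figure A6) and the ratio of power above vs. below 8 kHz in dB (Figure A7).
library(tuneR)

analyze_spectrum <- function(path, cutoff = 8000) {
  w  <- readWave(path)
  x  <- w@left / 2^(w@bit - 1)                         # normalized samples
  pw <- abs(fft(x))[1:(length(x) %/% 2)]^2             # one-sided power spectrum
  f  <- (seq_along(pw) - 1) * w@samp.rate / length(x)  # bin frequencies in Hz
  c(upper_99pct_Hz = f[which(cumsum(pw) / sum(pw) >= 0.99)[1]],
    high_vs_low_dB = 10 * log10(sum(pw[f >= cutoff]) / sum(pw[f < cutoff])))
}

analyze_spectrum("vcvcv_01.wav")                       # hypothetical corpus file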

Figure 1. Distribution of the age of the participants. The boxplots show the median (horizontal bar) and the interquartile range (box). The whiskers reach from the lowest to the highest observed value within 1.5 times the interquartile range. Each dot shows the age of one participant.
Figure 2. Exterior view of the mobile hearing laboratory.
Figure 3. Interior view of the mobile hearing laboratory. At the center right, a video screen displays images of the participants seated in the four booths. At the center left are four portable “follower” computers with fold-down screens, to which the screen, keyboard, and mouse of each booth are connected. The “leader” computer sits below them at the bottom center; its screen and those of the “follower” computers are visible.
Figure 4. Audiometric thresholds as a function of frequency for the left and right ears (N = 70). The black line shows the median; the gray area shows the interquartile range.
Figure 5. Performance on each speech-in-noise audiometry test; each dot shows the result of one participant. The boxplots show the median (horizontal bar) and the interquartile range (box). The whiskers reach from the lowest to the highest observed value within 1.5 times the interquartile range.
Figure 6. Main predictors of the consonant identification score. Importance is measured as the increase in mean square error (MSE) and is shown for the nine most important variables. The larger the value, the more important the variable. See Table 1 for abbreviations.
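As an illustration of how an importance ranking of this kind is obtained, the sketch below runs a permutation importance on invented data with scikit-learn; the study's actual pipeline and hyperparameters may differ:

```python
# Illustrative sketch, not the study's pipeline: permutation importance of
# random-forest predictors, expressed as the increase in mean square error
# (MSE) when a predictor is shuffled. Data and feature weights are invented;
# the feature names reuse the Table 1 abbreviations for readability.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(70, 5))                       # 70 listeners, 5 predictors
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=70)
names = ["RE_EHF", "AMDT_60 dB_500 Hz", "LE_8000 Hz", "LE_PTA", "Age"]

forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
result = permutation_importance(forest, X, y, n_repeats=30, random_state=0,
                                scoring="neg_mean_squared_error")
for name, delta in sorted(zip(names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>20s}  MSE increase: {delta:.3f}")
```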
Figure 7. Scatter plots of the nine most important predictors of the consonant identification score. (A). Right ear EHF threshold. (B). Amplitude modulation detection threshold at 60 dB SL at 500 Hz. (C). Left ear 8000 Hz threshold. (D). Left ear pure tone average. (E). Left ear wave I amplitude at 80 dB nHL. (F). Years of motorcycling. (G). Best ear pure tone average. (H). Frequency modulation detection threshold at 60 dB SL at 500 Hz. (I). Left ear EHF threshold. In each panel, the Spearman coefficient of correlation, its p-value, and the sample size are shown. When the correlation is significant (p < 0.05), a blue line indicates a linear fit, and the gray region indicates the 95% confidence interval of the regression line.
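The per-panel statistics of these scatter plots can be sketched as follows, on simulated data standing in for the study's measurements:

```python
# Illustrative sketch with simulated data: the per-panel statistics of the
# scatter plots (Spearman rho, p-value, sample size) and the linear fit
# drawn when p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
ehf = rng.uniform(0, 60, size=61)                      # EHF thresholds (dB HL)
score = 80 - 0.4 * ehf + rng.normal(scale=8, size=61)  # identification scores (%)

rho, p = stats.spearmanr(ehf, score)
print(f"rho = {rho:.2f}, p = {p:.3g}, n = {ehf.size}")
if p < 0.05:
    slope, intercept = np.polyfit(ehf, score, deg=1)   # the "blue line"
    print(f"linear fit: score = {slope:.2f} x EHF + {intercept:.1f}")
```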
Figure 8. Word-in-noise recognition. Importance is measured as the increase in mean square error (MSE) and is shown for the nine most important variables. The larger the value, the more important the variable in the model. See Table 1 for abbreviations.
Figure 9. Scatter plots of the nine most important predictors of the word-in-noise recognition threshold. (A). Frequency modulation detection threshold at 60 dB SL at 500 Hz. (B). Right ear EHF threshold. (C). History of hearing pathology. (D). Left ear 1000 Hz threshold. (E). Left ear EHF threshold. (F). Years of motorcycling. (G). Left ear DPOAE at 2000 Hz. (H). Left ear DPOAE at 1000 Hz. (I). Left ear 2000 Hz threshold. In each panel, the Spearman coefficient of correlation, its p-value, and the sample size are shown. When the correlation is significant (p < 0.05), a blue line indicates a linear fit, and the gray region indicates the 95% confidence interval of the regression line.
Figure 10. French matrix test. Importance is measured as the increase in mean square error (MSE) and is shown for the nine most important variables. The larger the value, the more important the variable in the model.
Figure 11. Scatter plots of the nine most important predictors of the French matrix test (FrMatrix). (A). Right ear EHF threshold. (B). Years of motorcycling. (C). Right ear DPOAE at 3000 Hz. (D). Amplitude modulation detection threshold at 60 dB SL at 500 Hz. (E). History of hearing pathology. (F). Right ear 4000 Hz threshold. (G). Right ear 125 Hz threshold. (H). Amplitude modulation detection threshold at 60 dB SL at 500 Hz. (I). Left ear EHF threshold. In each panel, the Spearman coefficient of correlation, its p-value, and the sample size are shown. When the correlation is significant (p < 0.05), a blue line indicates a linear fit, and the gray region indicates the 95% confidence interval of the regression line.
Figure 12. Speech-in-noise pragmatic scale. Importance is measured as the increase in mean square error (MSE) and is shown for the nine most important variables. The larger the value, the more important the variable in the model.
Figure 13. Scatter plots of the nine most important predictors of the speech-in-noise pragmatic scale. (A). Years of motorcycling. (B). Frequency modulation detection threshold at 60 dB SL at 500 Hz. (C). Right ear pure tone average. (D). Best ear pure tone average. (E). Right ear 8000 Hz threshold. (F). Left ear EHF threshold. (G). Left ear DPOAE at 3000 Hz. (H). Age. (I). Right ear 1000 Hz threshold. In each panel, the Spearman coefficient of correlation, its p-value, and the sample size are shown. When the correlation is significant (p < 0.05), a blue line indicates a linear fit, and the gray region indicates the 95% confidence interval of the regression line.
Figure 14. Scatter plots showing the correlations between the three speech-in-noise tests: (A). Consonant identification vs. French matrix test. (B). Words-in-noise recognition vs. French matrix test. (C). Consonant identification vs. words-in-noise recognition. In each panel, the Spearman coefficient of correlation, its p-value, and the sample size are shown. When the correlation is significant (p < 0.05), a blue line indicates a linear fit, and the gray region indicates the 95% confidence interval of the regression line.
Figure 15. Scatter plots showing the correlations between the speech-in-noise pragmatic scale and the three speech-in-noise tests. (A). Consonant identification vs. speech-in-noise pragmatic scale. (B). Words-in-noise recognition vs. speech-in-noise pragmatic scale. (C). French matrix test vs. speech-in-noise pragmatic scale. In each panel, the Spearman coefficient of correlation, its p-value, and the sample size are shown. When the correlation is significant (p < 0.05), a blue line indicates a linear fit, and the gray region indicates the 95% confidence interval of the regression line.
Table 1. Sample size for each combination of variable and condition in the dataset. AMDT: amplitude modulation (AM) detection threshold; FMDT: frequency modulation (FM) detection threshold; DPOAE: distortion products of otoacoustic emissions.
| Test | Conditions | N | Abbreviation |
|------|------------|---|--------------|
| Consonant identification | | 61 | |
| Word-in-noise recognition | | 56 | |
| French matrix test | | 69 | FrMatrix |
| Age | | 70 | Age |
| History of hearing pathology | | 70 | History_of_Hearing_Pathology |
| Years of motorcycling | | 70 | Years_of_Motocycling |
| AMDT | 60 dB SL, 4000 Hz | 42 | AMDT_60 dB_4000 Hz |
| AMDT | 60 dB SL, 500 Hz | 55 | AMDT_60 dB_500 Hz |
| AMDT | 10 dB SL, 4000 Hz | 55 | AMDT_10 dB_4000 Hz |
| AMDT | 10 dB SL, 500 Hz | 55 | AMDT_10 dB_500 Hz |
| FMDT | 60 dB SL, 4000 Hz | 22 | FMDT_60 dB_4000 Hz |
| FMDT | 60 dB SL, 500 Hz | 61 | FMDT_60 dB_500 Hz |
| FMDT | 10 dB SL, 4000 Hz | 23 | FMDT_10 dB_4000 Hz |
| FMDT | 10 dB SL, 500 Hz | 45 | FMDT_10 dB_500 Hz |
| FMDT | 60 dB SL, 4000 Hz, ability | 60 | FMDT_60 dB_4000 Hz_Ab |
| FMDT | 10 dB SL, 4000 Hz, ability | 61 | FMDT_10 dB_4000 Hz_Ab |
| FMDT | 10 dB SL, 500 Hz, ability | 61 | FMDT_10 dB_500 Hz_Ab |
| DPOAE | Left ear, 1000 Hz | 59 | LE_DPOAE_1000 Hz |
| DPOAE | Left ear, 1500 Hz | 62 | LE_DPOAE_1500 Hz |
| DPOAE | Left ear, 2000 Hz | 62 | LE_DPOAE_2000 Hz |
| DPOAE | Left ear, 3000 Hz | 62 | LE_DPOAE_3000 Hz |
| DPOAE | Left ear, 4000 Hz | 62 | LE_DPOAE_4000 Hz |
| DPOAE | Left ear, 5000 Hz | 58 | LE_DPOAE_5000 Hz |
| DPOAE | Right ear, 1000 Hz | 65 | RE_DPOAE_1000 Hz |
| DPOAE | Right ear, 1500 Hz | 63 | RE_DPOAE_1500 Hz |
| DPOAE | Right ear, 2000 Hz | 65 | RE_DPOAE_2000 Hz |
| DPOAE | Right ear, 3000 Hz | 65 | RE_DPOAE_3000 Hz |
| DPOAE | Right ear, 4000 Hz | 65 | RE_DPOAE_4000 Hz |
| DPOAE | Right ear, 5000 Hz | 61 | RE_DPOAE_5000 Hz |
| Tonal audiometry | Left ear, 125 Hz | 70 | LE_125 Hz |
| Tonal audiometry | Left ear, 250 Hz | 70 | LE_250 Hz |
| Tonal audiometry | Left ear, 500 Hz | 70 | LE_500 Hz |
| Tonal audiometry | Left ear, 1000 Hz | 70 | LE_1000 Hz |
| Tonal audiometry | Left ear, 2000 Hz | 70 | LE_2000 Hz |
| Tonal audiometry | Left ear, 4000 Hz | 70 | LE_4000 Hz |
| Tonal audiometry | Left ear, 8000 Hz | 70 | LE_8000 Hz |
| Tonal audiometry | Left ear, EHF | 70 | LE_EHF |
| Tonal audiometry | Left ear, PTA | 70 | LE_PTA |
| Tonal audiometry | Right ear, 125 Hz | 70 | RE_125 Hz |
| Tonal audiometry | Right ear, 250 Hz | 70 | RE_250 Hz |
| Tonal audiometry | Right ear, 500 Hz | 70 | RE_500 Hz |
| Tonal audiometry | Right ear, 1000 Hz | 70 | RE_1000 Hz |
| Tonal audiometry | Right ear, 2000 Hz | 70 | RE_2000 Hz |
| Tonal audiometry | Right ear, 4000 Hz | 70 | RE_4000 Hz |
| Tonal audiometry | Right ear, 8000 Hz | 70 | RE_8000 Hz |
| Tonal audiometry | Right ear, EHF | 70 | RE_EHF |
| Tonal audiometry | Right ear, PTA | 70 | RE_PTA |
| Tonal audiometry | Best ear, PTA | 70 | Best_Ear_PTA |
| Electrocochleography | Left ear, wave I, 80 dB HL | 37 | LE_WaveI_80 dB |
| Electrocochleography | Left ear, wave I, 90 dB HL | 38 | LE_WaveI_90 dB |
| Electrocochleography | Right ear, wave I, 80 dB HL | 49 | RE_WaveI_80 dB |
| Electrocochleography | Right ear, wave I, 90 dB HL | 52 | RE_WaveI_90 dB |
| Electrocochleography | Left ear, wave I slope | 34 | LE_Slope |
| Electrocochleography | Right ear, wave I slope | 46 | RE_Slope |
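As an illustration of how the composite audiometric predictors relate to raw thresholds, the sketch below builds a PTA and an EHF value from a hypothetical audiogram; the exact frequencies entering each variable are assumptions here, not the study's stated definitions:

```python
# Illustrative sketch only: building the PTA and EHF predictors of Table 1
# from a raw audiogram. The frequencies entering each variable are
# assumptions (a conventional 500-4000 Hz average for the PTA; the
# 12,500 Hz threshold for the EHF variable).
import numpy as np

thresholds_db_hl = {            # hypothetical left-ear audiogram (dB HL)
    125: 10, 250: 10, 500: 15, 1000: 15,
    2000: 20, 4000: 35, 8000: 40, 12500: 55,
}

LE_PTA = np.mean([thresholds_db_hl[f] for f in (500, 1000, 2000, 4000)])
LE_EHF = thresholds_db_hl[12500]
print(f"LE_PTA = {LE_PTA:.1f} dB HL, LE_EHF = {LE_EHF} dB HL")
```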
