Article

Audiovisual Integration for Saccade and Vergence Eye Movements Increases with Presbycusis and Loss of Selective Attention on the Stroop Test

1
IRIS Laboratory, Neurophysiology of Binocular Motor Control and Vision, CNRS UAR 2022, University of Paris, 45 rue des Saints-Pères, 75006 Paris, France
2
Orasis-Eye Analytics and Rehabilitation, 45, Rue des Saints-Pères, 75006 Paris, France
*
Author to whom correspondence should be addressed.
Brain Sci. 2022, 12(5), 591; https://doi.org/10.3390/brainsci12050591
Submission received: 21 February 2022 / Revised: 23 April 2022 / Accepted: 28 April 2022 / Published: 3 May 2022
(This article belongs to the Special Issue Eye Movements to Evaluate and Treat Attention Deficits)

Abstract

Multisensory integration is the capacity to merge information from different sensory modalities in order to improve the salience of the signal. Audiovisual integration is one of the most common kinds of multisensory integration, as vision and hearing are the two senses humans use most frequently. However, the literature on the effect of age-related hearing loss (presbycusis) on audiovisual integration abilities is almost nonexistent, despite the growing prevalence of presbycusis in the population. In that context, this study aims to assess the relationship between presbycusis and audiovisual integration using tests of saccade and vergence eye movements to visual vs. audiovisual targets, with a pure tone as the auditory signal. Tests were run with the REMOBI and AIDEAL technologies coupled with the Pupil Core eye tracker. Hearing abilities, eye movement characteristics (latency, peak velocity, average velocity, amplitude) for saccade and vergence eye movements, and Stroop Victoria test scores were measured in 69 elderly and 30 young participants. The results indicated (i) a dual pattern of aging effects on audiovisual integration for convergence (a decrease in the aged group relative to the young one, but an increase with age within the elderly group) and (ii) an improvement in audiovisual integration for saccades in people with presbycusis associated with lower selective-attention scores on the Stroop test, regardless of age. These results bring new insight into a little-explored topic, that of audio-visuomotor integration in normal aging and in presbycusis. They highlight the potential interest of using eye movement targets in 3D space and pure-tone sound to objectively evaluate audio-visuomotor integration capacities.

1. Introduction

Presbycusis is an important health issue, all the more so in the context of the global aging of the population. In 2011, a study in the US [1] found that 63% of the population above 70 years of age have at least slight hearing loss. The well-known consequences of presbycusis are an increase in the hearing threshold, poorer frequency resolution, and poorer comprehension in silence and in noise [2]. Nevertheless, other consequences of presbycusis have been studied over the last two decades, notably showing associations with dementia [3,4], depression [5], and cognitive deterioration [3,6,7,8,9], independently of age. Thus, the consequences of presbycusis extend well beyond hearing issues and seem to involve more central cortical aspects.
Multisensory integration (MI) is defined as an interactive synergy among the senses [10]: Different sensory modalities, with respect to a certain proximity in time and space [11,12], are integrated into the same neuron population. It provides a better perception of a multimodal stimulus if its sensory modalities are congruent. Effective multisensory integration processing relies on the proper functioning of (i) the cortical and subcortical circuits on which it depends (top-down processes) and (ii) the various peripheral sensory receptors and their neural paths (bottom-up processes). Among the cognitive factors affecting MI are, notably, semantic congruence [13] and selective attention [14,15].
Audiovisual integration (AVI) is probably one of the most-used kinds of multisensory integration. However, there is currently little knowledge concerning the effect of presbycusis on audiovisual integration. Studies on the influence of hearing loss on MI deal mostly with profound, long-standing hearing loss (as opposed to presbycusis, which is a milder and more recent loss of hearing). Two review studies report evidence of multisensory reorganization with old and profound hearing loss, including the activation of neurons in the auditory cortex for non-auditory information [16,17]. There is also substantial literature on the multisensory consequences for persons with cochlear implants, showing that audiovisual integration in these populations is strongly impacted by multisensory cortical reorganization [18,19,20].
The few existing studies dealing with the behavioral effect of presbycusis on audiovisual integration use various paradigms. Some of them assess audiovisual speech performances [21,22], while others use a multisensorial distractor paradigm [23] or the McGurk illusion, which consists of presenting incongruent audio-visual syllables leading to an illusionary percept (e.g., an auditory “ba” and a visual “ga” lead to the perception of the “da” sound) [24,25]. Although their results are contradictory (see discussion), they would indicate multisensory consequences of presbycusis. This is supported by recent electrophysiological studies showing that multisensory cortical reorganization in the auditory cortex is also possible following mild to moderate hearing loss [26,27].
In view of the limited number of studies, it is difficult to appreciate the potential effect of presbycusis on audiovisual integration. A plausible hypothesis would be the degradation of audiovisual integration with presbycusis. Indeed, as mentioned above, audiovisual integration is related to the proper functioning of hearing but also to certain cognitive processes, and presbycusis is associated with hearing and cognitive decline. However, a substantial literature on the effect of age on MI depicts a more complex mechanism than one might expect. Recent reviews indicate that multisensory benefits would be greater for older populations than for younger populations [12,28,29,30,31]. Several hypotheses have been formulated to explain the mechanisms behind this improvement of MI with age [28], but evidence in favor of one or the other hypothesis is still lacking. Thus, on the one hand, presbycusis could degrade audiovisual integration; on the other hand, one could expect an improvement as a compensatory mechanism.
Given the importance of audiovisual integration and the prevalence of presbycusis, learning more about these two phenomena is of high importance, and research on this topic might also extend our knowledge of aging. The purpose of this study is to address this issue using oculomotor measures. The measurement of eye movements, especially via saccade latencies, is a good tool to evaluate multisensory effects. A well-known beneficial effect of combining matching visual and auditory modalities is the reduction of reaction time. It has been demonstrated for manual reaction time [32] as well as for the latency of saccade eye movements [33,34,35,36,37,38,39]. However, while saccade latency is the measure that has been used most in this field, it is also interesting to use other oculomotor features and other types of eye movement. Studies have also shown an improvement in saccade accuracy with audiovisual targets [37,39]. Most importantly, Kapoula et al. [40] were the first to study the differential effect of sound on saccade and vergence eye movements in a multiple-case study. They reported a decrease in saccade latency but an increase in vergence velocity when comparing audiovisual vs. visual targets. Thus, it is possible that sound acts differently on the two types of eye movements. To what extent this depends on the type of population studied, namely on age, is not yet known.
Thus, the main focus of this study is to investigate the relation between presbycusis, age, and audiovisual integration by assessing the programming and execution of saccade and vergence eye movements toward visual and audiovisual targets. We will study these relationships both by comparing young and elderly populations and within the elderly population alone, which is the only one presenting presbycusis. The aim is to better understand how the brain handles audiovisual integration when hearing capacities become progressively weaker in an elderly population. Would audiovisual integration be degraded in relatively recent and age-related mild hearing loss? How would it evolve with age? Would it deteriorate because of the sensory loss, or, on the contrary, would it improve thanks to a compensatory mechanism?
We also decided to evaluate the relationship between the Stroop test and audiovisual integration. The Stroop test [41] is a gold standard for assessing selective attention and inhibition capacities, which are degraded with age [42,43,44,45]. This test allows us to characterize the cognitive abilities of our elderly population. Furthermore, it is relevant to associate it with audiovisual integration, as selective attention has been identified among the top-down cognitive processes that can modulate MI. More precisely, restricting attention to a particular sensory modality diminishes the multisensory enhancement given by the addition of another congruent sensory modality [14,15,46], and adults with attention deficits are more easily distracted by incongruent sensory stimuli [47,48,49].

2. Materials and Methods

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee “Ile de France II” (N° ID RCB: 019-A02602-55, approved the 3 October 2020).

2.1. Participants

An elderly group (EG) and a young group (YG) were tested. The young group was composed of 30 participants aged between 21 and 30 years (mean 25.3 ± 2.68). They were mostly students working in neighboring laboratories. The elderly group was composed of 69 participants aged between 51 and 84 years (mean 66.7 ± 8.4). They were essentially recruited via the RISC (relai d’information des sciences cognitives, France) platform of the CNRS and were autonomous. We excluded from the study persons with hearing loss wearing hearing aids, with conductive or genetic deafness, frequently exposed to loud noises or taking ototoxic drugs, persons with visual pathologies (e.g., AMD, non-operated cataract, or glaucoma), persons with ocular motor abnormalities (e.g., strabismus, ptosis, etc.), and persons taking drugs likely to affect sensory and motor functions. All this information was verbally requested from the participants. Among the elderly participants, 5% were treated for diabetes, 17% were treated for blood pressure issues, none had renal failure, and 14% had vascular issues (60% of whom were treated). Thus, the participants of the elderly group are considered a normal, representative aging population. Moreover, the exclusion criteria applied ensured that the potential hearing losses observed in the old group were due to age. Informed consent was obtained from all of the participants after the nature of the procedure had been explained.

2.2. Hearing Tests

A professional audiometrist performed two kinds of audiometry in a calibrated sound booth, with an Interacoustics audiometer (Middelfart, Denmark), model AD639. The outer ear canals of all the participants were checked before the hearing tests. Hearing tests were performed in 62 of the 69 elderly participants and 19 of the 30 young participants; as the audiometry tests were performed outside of the laboratory, some of the participants were no longer available for this second session.

2.2.1. Tonal Audiometry

This test aims to assess audibility, i.e., the minimum intensity required for a sound to be heard. It was performed, for each ear separately, with the TDH-39P headset. For each ear, we determined the lowest intensity in dB HL needed by the participant to detect the following pure tones: 250, 500, 750, 1000, 2000, 3000, and 4000 Hz. The final score, called the Pure Tone Average (PTA), represents, for each ear, the mean of all these thresholds. Only the better PTA of the two ears was retained, to match the hearing-loss definition of the World Health Organization (WHO), which considers hearing loss to be present when the best PTA of the two ears is above 20 dB HL [50].
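The PTA rule above can be sketched in a few lines. This is an illustrative sketch only (the threshold values are invented; the frequency set and the 20 dB HL WHO cutoff come from the text):

```python
# PTA per ear = mean threshold over the tested pure tones;
# WHO classification uses the better (lower-PTA) ear.
FREQS_HZ = [250, 500, 750, 1000, 2000, 3000, 4000]

def pure_tone_average(thresholds_db_hl):
    """Mean detection threshold (dB HL) over the tested frequencies, one ear."""
    assert len(thresholds_db_hl) == len(FREQS_HZ)
    return sum(thresholds_db_hl) / len(thresholds_db_hl)

def best_ear_pta(left, right):
    """The better ear is the one with the lower PTA."""
    return min(pure_tone_average(left), pure_tone_average(right))

def has_hearing_loss(left, right):
    """WHO definition used in the text: best-ear PTA above 20 dB HL."""
    return best_ear_pta(left, right) > 20.0

# Hypothetical listener with mild high-frequency loss in the left ear only:
left = [15, 15, 20, 25, 35, 40, 45]
right = [10, 10, 10, 15, 15, 20, 20]
print(round(best_ear_pta(left, right), 1))
print(has_hearing_loss(left, right))
```

Because classification depends on the better ear, the preserved right ear keeps this hypothetical listener in the "normal hearing" category.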

2.2.2. Vocal Audiometry in Silence

This test aims to assess speech comprehension in silence. Word lists of differing intensities are presented to the participant, enabling us to calculate a comprehension score for a given intensity. It was performed with a loudspeaker situated 1 m in front of the participant; thus, the two ears were tested simultaneously. The lists were presented at 70, 60, 50, 40, 30, 20, or 10 dB SPL. The lists used were the Lafon cochlear lists, composed of 17 monosyllabic words of 3 phonemes each (51 phonemes) [51]. The comprehension score for each list was the percentage of recognized phonemes in the list. The final score, called the SRT50 (Speech Recognition Threshold 50%), represents the intensity required to understand 50% of the phonemes of a list. The SRT50 was estimated by linear interpolation (cross-multiplication) between the intensities of the lists with scores just above and just below 50% comprehension.
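The interpolation step can be sketched as follows, assuming (as the text implies) that the comprehension score increases with intensity; the data and function name are illustrative, not from the study:

```python
# SRT50: interpolate linearly between the two list intensities whose
# comprehension scores bracket 50%.
def srt50(scores_by_intensity):
    """scores_by_intensity: dict {intensity in dB SPL: % phonemes recognized}."""
    pts = sorted(scores_by_intensity.items())  # ascending intensity
    for (i_lo, s_lo), (i_hi, s_hi) in zip(pts, pts[1:]):
        if s_lo < 50 <= s_hi:
            # cross-multiplication between the bracketing points
            return i_lo + (50 - s_lo) * (i_hi - i_lo) / (s_hi - s_lo)
    raise ValueError("50% comprehension not bracketed by the tested lists")

print(srt50({10: 8, 20: 35, 30: 65, 40: 90}))  # → 25.0
```

The SND50 of the noise test described below is estimated the same way, with signal-to-noise differences in place of intensities.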

2.2.3. Vocal Audiometry in Noise

This test aims to assess comprehension in noise. Word lists with differing Signal-to-Noise Differences (SND) are presented to the participant, enabling us to calculate a comprehension score for a given SND. It was performed with three loudspeakers situated 1 m from the participant: one at their back, one at their right, and one at their left. As in the vocal audiometry in silence test, the two ears were tested simultaneously. The word lists were played from the two loudspeakers at the sides, and the noise was played from the loudspeaker at the back. The SND represents the extent to which the speech signal is higher or lower than the noise signal. It is calculated by subtracting the intensity in dB SPL of the noise from that of the speech list (SND = signal intensity − noise intensity). During the entire test, the intensity of the word lists was unchanged, and the SND was varied for each new list by changing the intensity of the noise signal. For each participant, the intensity of the word lists was chosen as the lowest intensity in the vocal audiometry in silence that yielded the best score; for example, if, in the vocal audiometry in silence test, participant A had a recognition score of 100% for the list at 60 dB SPL, 100% at 50 dB SPL, and 82% at 40 dB SPL, then the intensity of the lists for the entire vocal audiometry in noise was set at 50 dB SPL. The SND values at which the lists were presented were 0, −5, −10, −15, and −20 dB. As for the vocal audiometry in silence, the lists used were the Lafon cochlear lists. The noise signal used was the “Onde Vocale Globale” (OVG), an incomprehensible babble noise composed of two couples speaking at the same time [52]. The comprehension score for each list was the percentage of recognized phonemes in the list. The final score, called the SND50 (Signal-to-Noise Difference 50%), represents the SND required to understand 50% of the phonemes of a list. The SND50 was estimated by linear interpolation between the SNDs of the lists with scores just above and just below 50% comprehension.

2.3. Oculomotor Tests

Divergence, convergence, and left and right saccades were elicited with the REMOBI device (patent US8851669, WO2011073288), a visio-acoustic device developed by our laboratory (Figure 1).
REMOBI is a flat surface on which red LEDs are displayed (wavelength of 626 nm, 180 mCd, diameter of 3 mm). Each LED is equipped with a buzzer delivering a 2048 Hz pure tone at 70 dB SPL. Participants were seated, and the REMOBI was placed at their eye level. They were instructed to look at the only lit LED as quickly and accurately as possible, and then to maintain fixation for as long as the LED stayed on, without moving the head. Thus, the localization and patterns of the LEDs allow testing of the desired eye movements.
During the saccades test, 20 trials of saccades to the right (RS) and 20 trials of saccades to the left (LS) were elicited, randomly interleaved. For each trial, participants first fixated on a central LED, situated at 70 cm in front of them (the same distance from both eyes). The right and left saccades were elicited by lighting a peripheral LED, also at 70 cm, but at 20° to the right or left from the central LED.
During the vergences test, 20 trials of convergence (C) and 20 trials of divergence (D) were elicited, randomly interleaved. All the LEDs were situated in front of the participant (the same distance for the left and right eyes) and varied only in depth. For each trial, participants first fixated on a central LED, situated 40 cm away. The divergence and convergence were elicited by lighting a peripheral LED, situated at either 20 cm (convergence) or 150 cm (divergence) from the participant.
For both the saccade and vergence tests, the central LED was switched on for a random duration between 1200 and 1800 ms. The peripheral LED was lit for 2000 ms. There was an overlap of 200 ms during which the two LEDs were lit at the same time (overlap paradigm). Trials were separated by a blank period of 300 to 700 ms. The total duration of a sequence was approximately 150 s.
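The trial timeline can be made concrete with a small sketch using the durations stated above (the event names are ours; only the numbers come from the text):

```python
# One overlap-paradigm trial: the peripheral (target) LED lights 200 ms
# before the central (fixation) LED goes off.
import random

def trial_timeline(rng):
    fix_dur = rng.uniform(1200, 1800)   # central LED on (ms), randomized
    target_dur = 2000                   # peripheral LED on (ms)
    overlap = 200                       # both LEDs lit simultaneously (ms)
    gap = rng.uniform(300, 700)         # blank period before next trial (ms)
    events = {
        "fixation_on": 0.0,
        "target_on": fix_dur - overlap,  # target appears while fixation still lit
        "fixation_off": fix_dur,
        "target_off": fix_dur - overlap + target_dur,
    }
    return events, events["target_off"] + gap  # events, total trial duration

rng = random.Random(1)
events, total = trial_timeline(rng)
# overlap paradigm: target onset precedes fixation offset
print(events["fixation_off"] - events["target_on"])  # → 200.0
```

In a step paradigm (discussed later in relation to Diederich and Colonius), `target_on` would instead coincide with `fixation_off`.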

2.4. The Targets Modality

The saccade and vergence tests were performed under a visual paradigm (Paradigm V) and an audiovisual paradigm (Paradigm AV). In the visual paradigm, the LEDs turned on without activation of their adjacent buzzer. In the audiovisual paradigm, an auditory signal was emitted by the buzzer adjacent to the LED, starting 50 ms before the activation of the LED and lasting 100 ms. According to prior studies, such an interval between the auditory and visual signals (50 ms) is the most effective in shortening eye movement latency, inducing both a warning effect and perhaps better localization of the visual target [35,40].

2.5. Eye Movements Analysis

The eye movements are captured with a head-mounted video-oculography device, Pupil Core (Pupil Labs, Berlin, Germany), enabling binocular recording at 200 Hz per eye, using a pupil-tracking mode. The standard Pupil Labs calibration (Pupil Capture) was applied using a target that was presented at eye level with a viewing distance of 1 m. The participant had to fixate on the center of this target and slowly move their head rightward, downward, leftward, and upward, repeating this sequence 3 times [53]. The confidence level was better than 80%. The data acquired are analyzed with AIDEAL software (pending international patent application: PCT/EP2021/062224 7 May 2021).
For saccades, AIDEAL treats the conjugate signal, i.e., (left eye + right eye)/2. The amplitude of the saccade is measured by defining its onset and offset as the points where the velocity of the movement rises above or falls below 10% of its peak velocity. In practice, this corresponded to values above or below 40°/s (as the peak velocity of 20° saccades is typically above 400°/s). For vergence eye movements, AIDEAL calculates the difference between the two individual calibrated eye position signals (i.e., left eye − right eye). The amplitude of vergence is divided into two components, following the dual-mode control model of vergence. This model divides the dynamics of vergence into two chronological steps: (i) an initial step of enhanced speed without visual feedback (open-loop) and (ii) a sustaining step, slower and driven by visual feedback (closed-loop) [54,55,56]. The initial open-loop component is defined by AIDEAL as the portion where the velocity of vergence is above 5°/s. The subsequent closed-loop component is measured by including the part of the movement over the next 160 ms. Different filters are then applied to the trials: AIDEAL first removes trials with blinks, then outliers, i.e., values greater than twice the standard deviation. For divergence, 29% ± 12% of trials were excluded for the young participants and 26% ± 20% for the elderly group. For convergence, 45% ± 14% were excluded for the young participants and 38% ± 17% for the elderly group. For left saccades, 16% ± 11% were excluded for the young participants and 28% ± 17% for the elderly group. For right saccades, 9% ± 10% were excluded for the young participants and 24% ± 18% for the elderly group.
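AIDEAL itself is proprietary, so the following is only a minimal sketch of the stated onset/offset criterion (saccade bounded by velocity crossing 10% of its peak), applied to a synthetic velocity trace:

```python
# Onset = first sample where velocity exceeds 10% of peak;
# offset = last sample where it still exceeds that threshold.
def saccade_bounds(velocity, fraction=0.10):
    """Return (onset_index, offset_index) using a percentage-of-peak threshold."""
    peak = max(velocity)
    thr = fraction * peak
    onset = next(i for i, v in enumerate(velocity) if v > thr)
    offset = len(velocity) - 1 - next(
        i for i, v in enumerate(reversed(velocity)) if v > thr)
    return onset, offset

# Crude triangular velocity profile peaking at 400 deg/s, so the 10%
# threshold is the 40 deg/s value mentioned in the text.
vel = [0, 5, 20, 120, 300, 400, 310, 150, 60, 25, 5, 0]
on, off = saccade_bounds(vel)
print(on, off)  # → 3 8
```

Amplitude would then be the position change between these two samples; the 5°/s vergence threshold plays the analogous role for the open-loop vergence component.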

2.6. Eye Movements Characteristics Measured

Latency (Lat): Expressed in ms. It represents the time between the activation of the peripheral LED and the initiation of the movement.
Peak Velocity (PVel): Expressed in °/s. It is measured for the total saccade and the initial open-loop component of vergence.
Average Velocity (AVel): Expressed in °/s. It is measured for the total saccade and the total vergence (open-loop component + closed-loop component).
Amplitude (Amp): Expressed in %. It represents the percentage of the amplitude required to reach the peripheral target (20° for saccades, 8.76° for convergence, and 6.5° for divergence).
For each of these characteristics, an AVI (AudioVisual Integration) variable is computed, representing the value for Paradigm AV minus the value for Paradigm V: AVI(Lat) = (Lat for AV) − (Lat for V); AVI(PVel) = (PVel for AV) − (PVel for V); AVI(AVel) = (AVel for AV) − (AVel for V); AVI(Amp) = (Amp for AV) − (Amp for V).
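The AVI variables are plain AV-minus-V differences; a minimal sketch with made-up per-condition means (the dictionary keys mirror the abbreviations above):

```python
# AVI per characteristic: Paradigm AV value minus Paradigm V value.
# A negative AVI(Lat) means sound shortened the latency.
def avi(av, v):
    return {k: av[k] - v[k] for k in ("Lat", "PVel", "AVel", "Amp")}

# Hypothetical condition means for one participant and one movement type:
visual = {"Lat": 210.0, "PVel": 420.0, "AVel": 250.0, "Amp": 95.0}
audiovisual = {"Lat": 195.0, "PVel": 428.0, "AVel": 255.0, "Amp": 96.0}
print(avi(audiovisual, visual))
```

With these invented numbers, AVI(Lat) is −15 ms (a latency benefit from sound) while the velocity and amplitude AVIs are positive.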

2.7. Stroop Test

The original Stroop test was created in 1935 by J.R. Stroop [41], and many variations have been created since. A Stroop test aims to assess inhibition and selective attention capacities.
The French version of the Stroop Victoria was used in the current study [57]. It is composed of three parts. In each part, participants name as quickly and accurately as possible the color of 24 items (6 lines of 4 items) presented on a sheet of A4 paper. In the first part, called the Dot condition, the items are dots. In the second part, called the Word condition, the items are irrelevant words (words with neutral meaning). The items of the third part, called the Interference condition, are color words printed in an incongruent ink color (e.g., the word “blue” written in red).
Selective attention and inhibition capacities are evaluated by comparing performance in the Dot and Interference conditions. The Dot condition assesses the baseline capacity of color recognition and enumeration. The Interference condition also assesses these capacities, but with the interference of the words. The information given by the automatic reading of the word is more intrusive than its ink color and has to be inhibited by the brain.
The score extracted from the Stroop test, called Stroop_I/D, represents the ratio of the time taken to finish the Interference condition to the time taken to finish the Dot condition. A higher Stroop_I/D reveals lower capacities of selective attention and inhibition.
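Consistent with the I/D naming, the score can be sketched as a simple ratio (the completion times are illustrative):

```python
# Stroop_I/D: Interference-condition time divided by Dot-condition time.
# Higher values indicate weaker selective attention and inhibition.
def stroop_i_d(interference_s, dot_s):
    return interference_s / dot_s

# e.g., 30 s to name the ink colors under interference vs. 15 s for dots:
print(stroop_i_d(30.0, 15.0))  # → 2.0
```

A ratio of 2.0 would match the mean reported below for the elderly group (2.05 ± 0.47).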

2.8. Data Analyses

The relationships between AVI and age are measured with simple linear regressions and correlations: AVI(Lat)~Age, AVI(PVel)~Age, AVI(AVel)~Age, and AVI(Amp)~Age. These results are presented in the results Section 3.2—AVI and Age.
The relationships between AVI and hearing are measured with multiple regression analysis in order to control for age and avoid a potential confounding effect: AVI(Lat)~Hearing + Age, AVI(PVel)~Hearing + Age, AVI(AVel)~Hearing + Age, AVI(Amp)~Hearing + Age. Only the results of the relationships between AVI and hearing are presented in the results Section 3.3—AVI and Hearing. Thus, these results are independent of age.
For the same reason, the relationships between AVI and Stroop are also measured with multiple regression analysis: AVI(Lat)~Stroop + Age, AVI(PVel)~Stroop + Age, AVI(AVel)~Stroop + Age, AVI(Amp)~Stroop + Age. Only the results of the relationships between AVI and Stroop scores, independent of age, are presented in the results Section 3.4—AVI and Stroop.
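The age control amounts to fitting an ordinary least squares model with two predictors and reading off the hearing (or Stroop) coefficient. The study presumably used a standard statistics package; the sketch below, with toy data, only illustrates the idea via the normal equations:

```python
# OLS for y = b0 + b1*x1 + b2*x2 + ... solved from the normal equations
# (X'X) b = X'y by Gauss-Jordan elimination. Toy data, not study data.
def ols(y, *xs):
    n = len(y)
    cols = [[1.0] * n] + [list(x) for x in xs]  # design matrix columns
    k = len(cols)
    xtx = [[sum(a * b for a, b in zip(cols[i], cols[j])) for j in range(k)]
           for i in range(k)]
    xty = [sum(a * b for a, b in zip(cols[i], y)) for i in range(k)]
    for i in range(k):                           # Gauss-Jordan elimination
        piv = xtx[i][i]
        xtx[i] = [v / piv for v in xtx[i]]
        xty[i] /= piv
        for r in range(k):
            if r != i:
                f = xtx[r][i]
                xtx[r] = [a - f * b for a, b in zip(xtx[r], xtx[i])]
                xty[r] -= f * xty[i]
    return xty  # [b0, b1, b2, ...]

# Fabricated data where AVI = 2 + 0.5*hearing - 0.1*age exactly:
hearing = [10, 20, 25, 30, 40, 15]
age = [60, 65, 70, 75, 80, 55]
y = [2 + 0.5 * h - 0.1 * a for h, a in zip(hearing, age)]
b0, b_hear, b_age = ols(y, hearing, age)
print(round(b_hear, 3), round(b_age, 3))
```

Because age enters the model as its own predictor, `b_hear` captures the hearing-AVI relationship net of age, which is exactly why only that coefficient is reported in Sections 3.3 and 3.4.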

3. Results

3.1. Characterization of the Elderly Group

Before analyzing the AVI relationships, this part characterizes the elderly participants by comparing their results with those of the younger group and with standard results.
Table 1 groups the means and standard deviations of the oculomotor characteristics for each group. Each line refers to an eye movement (D for divergence, C for convergence, LS for saccades to the left, and RS for saccades to the right); values are shown for visual targets and audiovisual targets. Table 2 shows the group means and standard deviations of the hearing scores for each group of participants.
Results show longer latencies for the elderly for all types of eye movements. The audiometry scores (PTA, SRT50, and SND50) are higher for the elderly than for the young, meaning lower audibility and speech comprehension for this group. The selective attention evaluated by the Stroop test also reflects lower performance for the elderly: Stroop_I/D is 2.05 ± 0.47 for the elderly vs. 1.61 ± 0.30 for the young. All these results are in agreement with prior studies showing a deterioration of hearing with age [1], an increase in oculomotor latencies [58,59,60,61], and a deterioration in Stroop performance with age [43,45]. Our results show all three deteriorations occurring in the same elderly participants.
Figure 2 regroups the classification of hearing loss and of Stroop results for elderly participants. The hearing loss classification is according to the WHO scale of hearing loss [50] and the Stroop result classification is according to the model built in the study of Bayard et al. [57].
Figure 2A shows that, according to the WHO scale, 46% of the elderly participants had normal hearing, 45% presented mild hearing loss (PTA of the better ear between 20 and 35 dB HL), 6% presented moderate hearing loss (PTA of the better ear between 35 and 50 dB HL), and 1% presented moderately severe hearing loss (PTA of the better ear between 50 and 65 dB HL). Such prevalence is in the normal range and in agreement with prior studies [1]. Figure 2B shows that, given the classification provided by the French Stroop Victoria test, none of the elderly participants were classifiable as presenting cognitive deficiency (none of the Stroop_I/D scores were in the “deficit” category). To summarize, all the results pointed to a healthy aging population.

3.2. AVI and Age

The correlations and regression lines between the AVI and age are presented in Table 3.
The four rows assess the relationships between the different AVI variables and age for (top to bottom) divergence, convergence, left saccades, and right saccades. For each row, the first line shows the results for the whole population (young and elderly participants); the second line shows the results for the elderly group alone. The columns indicate the eye movement characteristic measured (Latency, Peak Velocity, Average Velocity, or Amplitude). Thus, for example, the first row and first column result from the linear regression Divergence AVI(Lat)~Age. The values to focus on are “a”, representing the slope of the regression line. The significance level is indicated with asterisks: “***” for p lower than 0.001, “**” for p between 0.001 and 0.01, “*” for p between 0.01 and 0.05, and “.” for p between 0.05 and 0.1. The value “cor” represents the Pearson correlation coefficient.
The results in Table 3 show significant relationships between the AVI for convergences and age. These relationships are also represented in Figure 3. However, they differ in nature depending on the populations studied.
Considering the whole population (young and elderly participants), there is a negative effect of age on audiovisual integration for convergence. This negative effect is reflected in the significant increase in AVI(Lat) for convergence with age (see Figure 3A). In other words, the reduction in convergence latency when adding sound to a visual target is smaller for elderly participants than for young participants.
When considering only the elderly participants, there is a positive effect of age on the audiovisual integration for convergences. This positive effect is reflected in a significant decrease in AVI(Lat) with age, and significant increases in AVI(AVel) and AVI(Amp) for convergences with age. In other words, the improvements, related to the addition of sound, in latency, average speed, and amplitude are greater for the older participants of the elderly group.

3.3. AVI and Hearing

The regression lines (extracted from multiple regression analyses with age as an explanatory variable, see Section 2.8—Data Analyses) between AVI(Lat) and hearing are presented in Table 4.
As in Table 3, the first row gives the results for divergence, the second for convergence, the third for left saccades, and the fourth for right saccades. Each row is divided into two lines: the first presents the results for the whole population (young and elderly participants) and the second presents the results for the elderly participants alone. The columns indicate the hearing score (PTA, SRT50, or SND50). The values to focus on are “a”, representing the slope of the regression line. The significance level is indicated with asterisks: “***” for p lower than 0.001, “**” for p between 0.001 and 0.01, “*” for p between 0.01 and 0.05, and “.” for p between 0.05 and 0.1.
There are significant effects of PTA and SRT50 scores on saccade AVI(Lat), independently of age, whether considering the elderly participants alone or the whole population: AVI(Lat) for saccades decreases as PTA or SRT50 increases, as shown in Figure 4 (note, however, that this figure shows the simple regression lines, where the age factor is not controlled). In other words, the reduction in saccade latency when adding sound to a visual target increases with the age-related decrease in audibility and speech comprehension in silence.
The regression lines (extracted from multiple regression analyses with age as an explanatory variable, see Section 2.8—Data Analyses) between AVI(AVel), AVI(PVel), AVI(Amp), and hearing are presented in Appendix A. There are no significant relationships for any kind of eye movement, whether considering the whole population or the elderly participants alone.

3.4. AVI and Stroop

The regression lines (extracted from multiple regression analyses with age as an explanatory variable, see Section 2.8—Data Analyses) between AVI and Stroop_I/D are presented in Table 5. This table is read in the same way as Table 4, explained in results Section 3.3—AVI and Hearing. The difference lies in the columns, which represent the different eye movement characteristics (Latency, Peak Velocity, Average Velocity, and Amplitude).
There are no significant effects of Stroop_I/D on AVI(PVel), AVI(AVel), or AVI(Amp), for any kind of eye movement, considering the whole population or just elderly participants.
There is a significant effect of Stroop_I/D on AVI(Lat) for right saccades, and a nearly significant effect of Stroop_I/D on AVI(Lat) for left saccades, considering the whole population (young and elderly participants): AVI(Lat) for saccades decreases as Stroop_I/D increases, as shown in Figure 5 (note, however, that this figure shows simple regression lines, where the age factor is not controlled). In other words, for the whole population, the reduction in saccade latency when adding sound to the visual target increases with the reduction in inhibition capacities measured with the Stroop test.

4. Discussion

The major findings of the study are (i) the complex age effects on audiovisual integration for convergence eye movements only, (ii) the increase in audiovisual integration for saccades with age-related hearing loss, and (iii) the improvement in audiovisual integration for saccades for persons with lower selective attention scores as measured by the Stroop test.

4.1. Conditions for Improved Audiovisual Integration between Young and Elderly

It is interesting to note that our study shows no improvement in audiovisual integration between the young group and the elderly group. This contrasts with most previous studies dealing with the effect of age on audiovisual integration for reaction times, which show an improvement in audiovisual benefits in seniors compared to young adults, whether assessing a color-discrimination task [30] or a simple reaction task [12,31,62,63]. However, studies using spatial discrimination tasks (e.g., follow the orientation of an arrow, look at a location) have more mixed results. The study of Diederich and Colonius [12], as well as that of Zou et al. [64], found greater audiovisual facilitation in the elderly compared to a young population, while two other studies did not find a difference [65,66]. Perhaps the difference between these studies is related to the activation of covert visuomotor mechanisms in cases where spatial orientation is involved; see, for instance, the theory of the motor basis of visual attention [67]. As the design of the current study involves locating and moving the eyes to the stimuli, the absence of audiovisual integration improvement for elderly participants is in line with some of the previously cited studies. Perhaps the more complex the task, the less apparent the component of audiovisual integration (i.e., a ceiling effect).
Among the studies that report an improvement in audiovisual integration, that of Diederich and Colonius [12] is the closest to the current study, as it also assesses the reaction time of left and right saccades to visual vs. audiovisual targets. Some differences between their study design and ours could explain the differences in results. Firstly, Diederich and Colonius used speakers emitting bursts of white noise as their auditory signal, while we used a pure-tone sound of 2000 Hz. This choice of auditory signal impacts sound localization: a pure-tone sound of 2000 Hz is harder to localize than a burst of white noise. Indeed, a large spectral width (as for white noise) increases the accuracy of spatial localization, and frequencies between 1000 and 3000 Hz have the poorest localization accuracy [68]. Moreover, the elderly population has decreased abilities in sound localization in space [69,70,71]. Thus, a potential enhancement of audiovisual benefits for the elderly could have been compromised by the inability to localize the pure-tone sound during the vergence tests. Good sound localization is important for audiovisual integration, which becomes stronger with the spatial proximity of the auditory and visual modalities [39]. Another difference between our study and that of Diederich and Colonius concerns the diode activation paradigm. They used a "step" paradigm, where the target LED is lit at the same time as the fixation LED is switched off, while our study used an "overlap" paradigm, where the target LED is lit before the fixation LED is switched off. The sequence of such events impacts the attentional mechanisms of eye movement preparation [72,73], which are known to be affected by aging [61,74,75]. Finally, it is important to note that the age ranges of the two studies differ. Although the means are similar, the range in our study was very large, from 50 to 84 years, whereas in the study of Diederich and Colonius it was more restricted, from 65 to 75 years.
Further research on these differences in study design could help to better understand why audiovisual integration is found to be higher in the elderly than in young persons in some studies but not in others, even though audiovisual integration itself is present in all of them.

4.2. Decrease Followed by Increase in Audiovisual Integration with Age

Considering the whole population (young and elderly groups), the increase in AVI(Lat) indicates a deterioration of audiovisual integration for convergence with age between the young and elderly groups. However, considering the elderly participants alone, the decrease in AVI(Lat) and the increases in AVI(AVel) and AVI(Amp) indicate an improvement in audiovisual integration with age within the elderly group.
In other words, the evolution of audiovisual integration for convergence with age seems to follow a dual pattern: (i) a deterioration for the elderly relative to young adults and (ii) an improvement with age within the senior group. However, no effect of age was found on audiovisual integration for divergences and saccades, either between the young and elderly groups or within the elderly group.
The first part of this dual pattern (deterioration between young and elderly) is surprising, as, to our knowledge, no study to date has found a deterioration of multisensory integration between young and elderly participants. However, this is the first study assessing convergence eye movements. Convergence is the most complex and fragile eye movement, the first to be affected by age, fatigue, and neurologic problems, which could explain this result. The studies dealing with the effect of aging on multisensory integration selected tasks achievable by young and elderly participants with similar effort, which is not the case for convergence eye movements.
However, the second part of this dual pattern (improvement within the elderly population) suggests that multisensory integration can increase with age as a compensatory strategy to overcome age-related difficulties.

4.3. The Effect of Presbycusis on the Audiovisual Integration

As mentioned in the introduction, the studies assessing the effect of presbycusis on audiovisual integration are scarce and have inconsistent results. The studies of Tye-Murray et al. [21] and Reis and Escada [22] assess the effect of presbycusis on the visual enhancement produced by speechreading (speech comprehension aided by the visual cues of the speaker's face). This enhancement of speech comprehension with visual cues is a well-documented phenomenon. It occurs for people with presbycusis, as shown in the studies of Tye-Murray et al. and Reis and Escada, as well as for listeners with normal hearing in a noisy environment [76] or with clear and intelligible speech [77,78]. It should be noted that, although this enhancement can easily be assimilated to audiovisual integration abilities, its mechanism is more subtle. Indeed, according to models of speech perception with lipreading, this speech enhancement also relies on the ability to lipread, i.e., the capacity to understand speech with only the visual cues of the speaker's face (without auditory cues), and to encode the auditory information [21]. The study of Tye-Murray et al. did not find better enhancement from speechreading for a presbycusis group compared to an age-matched group of normal listeners; they conclude that age-related hearing loss does not lead to better audiovisual integration. On the contrary, the study of Reis and Escada found greater enhancement from speechreading for the group with presbycusis than for the age-matched group of normal listeners, which could arise from better audiovisual integration. The study of Puschmann et al. [23], using a cross-modal distractor paradigm in a categorization task, found that people with presbycusis make more errors when confronted with a multisensory distractor than normal-hearing people of the same age, suggesting that presbycusis affects the processing of audiovisual information.
Finally, a study by Rosemann and Thiel [24] compared the McGurk illusion [25] in a presbycusis population and a normal-hearing population of the same age. They found that the illusion is more pronounced in presbycusis participants and conclude that age-related hearing loss enhances audiovisual integration abilities.
Based on all these studies, it is hard to obtain a clear view of the effects of presbycusis on audiovisual integration. However, the current study, using simple eye movement responses to the target, which are among the fastest reaction times in humans, provides more direct evidence for increased audiovisual integration with age-related hearing loss. In previous studies, audiovisual integration effects might have been hidden, for instance, by delays in the speech or manual responses. Eye movements are fast and easy to record and provide a more objective data basis.
This result brings new insight into the literature on multisensory integration and aging. The reduced sensitivity of sensory systems induced by aging is one of the main hypotheses explaining the improvement of multisensory integration with age. The results of the current study are coherent with this hypothesis and further show that this compensatory multisensory integration can occur regardless of age.

4.4. The Effect of Selective Attention Loss on Audiovisual Integration

The literature has already highlighted that selective attention is one of the cognitive top-down mechanisms impacting multisensory integration [28]. Indeed, selective attention improves the perception of the task-relevant stimulus and suppresses the perception of the task-irrelevant stimulus. Attention focused on a specific sensory modality decreases cortical activity for the other sensory modalities, leading to a decrease in multisensory integration when a stimulus arrives in an unattended sensory modality [79,80,81]. It is also well known that selective attention degrades with age [42,43,44,45].
The influence of selective attention loss on multisensory integration abilities in the elderly has benefits and disadvantages. On one hand, the elderly are more likely to be distracted by incongruent sensory modalities [47,48,82]. On the other hand, they are more receptive to the multisensory integration of congruent stimuli. The loss of selective attention with age is, moreover, one of the hypotheses explaining the improvement of multisensory integration with age [28]. The result of the current study confirms the impact of selective attention on multisensory integration and is consistent with this hypothesis. In the current study, the participants were not informed of the audiovisual condition, and the only instruction they received was to look at the LEDs as rapidly and as accurately as possible. They were therefore more focused on the visual component, and some were not even aware of the presence of the auditory component. Thus, it is possible that the participants with greater selective attention capacity were more focused and did not integrate the auditory component.

5. Conclusions

The present study introduces a novel research field: the influence of physiologic age-related hearing loss on the audiovisual integration capacities evaluated with saccade and vergence eye movements to visual vs. audiovisual targets using a pure tone. The eye movement results alone show that aging effects are specific to some parameters of eye movements, namely latency, while velocity and amplitude do not change drastically (see Table 1). The major results are (i) an improvement of audiovisual integration for convergence with age within the elderly group and (ii) an improvement of audiovisual integration for saccades with the deterioration of hearing and selective attention, independently of age.
As convergence is the most complex eye movement, the fact that audiovisual integration for it improves within the elderly group highlights multisensory compensatory mechanisms that can be mobilized, particularly for perception and action towards targets at different depths (near vs. far space).
The improvement of audiovisual integration for saccades with the loss of selective attention, as measured by the Stroop test, confirms the influence of top-down attentional mechanisms on the quality of multisensory integration.
We suggest that the evaluation of audiovisual integration via eye movement tests could be relevant and helpful for orienting auditory rehabilitation. Since the hair cells lost with presbycusis cannot regenerate, multisensory integration could be of great importance for the quality of audition and quality of life.

Author Contributions

M.C. co-designed the study, conducted the experiments, analyzed the data, performed the statistics, and co-wrote the manuscript. Z.K. designed the study, developed the algorithms for data analysis, and co-wrote the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

M.C. was financed by the ANRT CIFRE grant (n° 2018/1075) and the society Audilab Versailles (SIREN 828892059). The CIFRE program—Conventions Industrielles de Formation par la REcherche—subsidizes any company under French law that hires a doctoral student to place him or her at the heart of a research collaboration with a public laboratory. The work must prepare for the defense of a thesis.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee "Ile de France II" (N° ID RCB: 019-A02602−55, approved on 3 October 2020).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank Paul Seimandi for providing comments on the manuscript, Julie Bestel, Audilab, and ANRT CIFRE for the financial support for CHAVANT M., and the INJS (Institut National de Jeunes Sourds de Paris) for access to their sound booth cabin.

Conflicts of Interest

Z.K. (PhD, HDR, EMBA), Research Director at the CNRS, presides over the CNRS spinoff Orasis-Eye Analytics and Rehabilitation.

Patents

REMOBI: US8851669, WO2011073288; AIDEAL: PCT/EP2021/062224, 7 May 2021.

Appendix A

These tables are read in the same way as Table 4, explained in Section 3.3—AVI and Hearing.
Table A1. Relations between AVI(PVel) and Hearing.

| Movement | Group | AVI(PVel)~PTA: a (StdErr; t) | AVI(PVel)~SRT50: a (StdErr; t) | AVI(PVel)~SND50: a (StdErr; t) |
|---|---|---|---|---|
| D | EG + YG | −0.127 (0.552; −0.231) | −0.004 (0.557; −0.008) | 1.258 (1.528; 0.824) |
| D | EG | −0.033 (0.522; −0.064) | −0.238 (0.55; −0.432) | −0.039 (1.683; −0.023) |
| C | EG + YG | −1.197 (0.711; −1.684) | −0.981 (0.724; −1.356) | 0.421 (2.06; 0.204) |
| C | EG | −1.131 (0.816; −1.387) | −0.817 (0.873; −0.935) | −0.822 (2.79; −0.295) |
| LS | EG + YG | 0.072 (0.895; 0.08) | −0.457 (0.925; −0.494) | −1.061 (2.692; −0.394) |
| LS | EG | 0.551 (0.832; 0.663) | 0.693 (0.897; 0.772) | 3.566 (2.853; 1.25) |
| RS | EG + YG | 1.462 (1.048; 1.395) | 0.337 (1.078; 0.313) | −3.755 (2.798; −1.342) |
| RS | EG | 1.147 (1.21; 0.947) | 0.049 (1.3; 0.038) | −3.897 (3.55; −1.098) |
Table A2. Relations between AVI(AVel) and Hearing.

| Movement | Group | AVI(AVel)~PTA: a (StdErr; t) | AVI(AVel)~SRT50: a (StdErr; t) | AVI(AVel)~SND50: a (StdErr; t) |
|---|---|---|---|---|
| D | EG + YG | 0.066 (0.083; 0.786) | 0.05 (0.084; 0.594) | 0.134 (0.24; 0.558) |
| D | EG | 0.09 (0.082; 1.096) | 0.059 (0.087; 0.674) | 0.091 (0.284; 0.319) |
| C | EG + YG | −0.011 (0.108; −0.104) | −0.02 (0.109; −0.182) | 0.217 (0.299; 0.725) |
| C | EG | −0.073 (0.109; −0.67) | −0.067 (0.115; −0.578) | 0.154 (0.359; 0.429) |
| LS | EG + YG | 0.038 (0.093; 0.412) | −0.093 (0.096; −0.973) | −0.034 (0.269; −0.126) |
| LS | EG | 0.063 (0.099; 0.635) | −0.066 (0.107; −0.615) | −0.155 (0.327; −0.473) |
| RS | EG + YG | 0.08 (0.105; 0.763) | 0.005 (0.107; 0.049) | 0.304 (0.294; 1.034) |
| RS | EG | 0.061 (0.11; 0.558) | −0.016 (0.118; −0.138) | 0.186 (0.343; 0.544) |
Table A3. Relations between AVI(Amp) and Hearing.

| Movement | Group | AVI(Amp)~PTA: a (StdErr; t) | AVI(Amp)~SRT50: a (StdErr; t) | AVI(Amp)~SND50: a (StdErr; t) |
|---|---|---|---|---|
| D | EG + YG | 0.151 (0.34; 0.445) | 0.215 (0.342; 0.628) | 0.911 (0.949; 0.96) |
| D | EG | 0.265 (0.296; 0.895) | 0.157 (0.315; 0.498) | 0.31 (1.002; 0.309) |
| C | EG + YG | −0.097 (0.398; −0.243) | −0.051 (0.402; −0.127) | 0.785 (1.087; 0.722) |
| C | EG | −0.284 (0.386; −0.736) | −0.131 (0.411; −0.319) | 0.291 (1.276; 0.228) |
| LS | EG + YG | 0.052 (0.105; 0.497) | −0.102 (0.108; −0.941) | −0.081 (0.3; −0.269) |
| LS | EG | 0.105 (0.109; 0.963) | −0.054 (0.118; −0.46) | −0.246 (0.353; −0.698) |
| RS | EG + YG | 0.133 (0.107; 1.243) | 0.049 (0.11; 0.443) | 0.206 (0.298; 0.693) |
| RS | EG | 0.135 (0.112; 1.203) | 0.051 (0.121; 0.427) | 0.067 (0.349; 0.192) |
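As a quick sanity check on how to read these tables: each t value is simply the regression coefficient divided by its standard error. The values below are taken from the divergence row for the whole population (EG + YG) in Table A1 for the PTA model.

```python
# t value = coefficient / standard error; values from Table A1, row D, EG + YG, PTA model.
a, std_err, t_reported = -0.127, 0.552, -0.231
t_computed = a / std_err
# Agreement to two decimal places (small differences come from rounding).
assert round(t_computed, 2) == round(t_reported, 2)
```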

References

1. Lin, F.R.; Thorpe, R.; Gordon-Salant, S.; Ferrucci, L. Hearing loss prevalence and risk factors among older adults in the United States. J. Gerontol. A Biol. Sci. Med. Sci. 2011, 66, 582–590.
2. Yamasoba, T.; Lin, F.R.; Someya, S.; Kashio, A.; Sakamoto, T.; Kondo, K. Current concepts in age-related hearing loss: Epidemiology and mechanistic pathways. Hear. Res. 2013, 303, 30–38.
3. Lin, F.R.; Ferrucci, L.; Metter, E.J.; An, Y.; Zonderman, A.B.; Resnick, S.M. Hearing loss and cognition in the Baltimore Longitudinal Study of Aging. Neuropsychology 2011, 25, 763–770.
4. Gallacher, J.; Ilubaera, V.; Ben-Shlomo, Y.; Bayer, A.; Fish, M.; Babisch, W.; Elwood, P. Auditory threshold, phonologic demand, and incident dementia. Neurology 2012, 79, 1583–1590.
5. Li, C.-M.; Zhang, X.; Hoffman, H.; Cotch, M.F.; Themann, C.L.; Wilson, M.R. Hearing impairment associated with depression in US adults, National Health and Nutrition Examination Survey 2005–2010. JAMA Otolaryngol. Head Neck Surg. 2014, 140, 293–302.
6. Amieva, H.; Ouvrard, C.; Giulioli, C.; Meillon, C.; Rullier, L.; Dartigues, J.-F. Self-reported hearing loss, hearing aids, and cognitive decline in elderly adults: A 25-year study. J. Am. Geriatr. Soc. 2015, 63, 2099–2104.
7. Curhan, S.G.; Willett, W.; Grodstein, F.; Curhan, G. Longitudinal Study of Hearing Loss and Subjective Cognitive Function Decline in Men. 2019. Available online: https://www.sciencedirect.com/science/article/abs/pii/S1552526018336069 (accessed on 24 June 2019).
8. Tun, P.A.; McCoy, S.; Wingfield, A. Aging, hearing acuity, and the attentional costs of effortful listening. Psychol. Aging 2009, 24, 761.
9. Lin, F.R.; Yaffe, K.; Xia, J.; Xue, Q.-L.; Harris, T.B.; Purchase-Helzner, E.; Satterfield, S.; Ayonayon, H.N.; Ferrucci, L.; Simonsick, E.M.; et al. Hearing loss and cognitive decline in older adults. JAMA Intern. Med. 2013, 173, 293–299.
10. Stein, B.E.; Stanford, T.R. Multisensory integration: Current issues from the perspective of the single neuron. Nat. Rev. Neurosci. 2008, 9, 255–266.
11. Colonius, H.; Diederich, A. The optimal time window of visual-auditory integration: A reaction time analysis. Front. Integr. Neurosci. 2010, 4, 11.
12. Diederich, A.; Colonius, H.; Schomburg, A. Assessing age-related multisensory enhancement with the time-window-of-integration model. Neuropsychologia 2008, 46, 2556–2562.
13. Laurienti, P.J.; Kraft, R.A.; Maldjian, J.A.; Burdette, J.H.; Wallace, M.T. Semantic congruence is a critical factor in multisensory behavioral performance. Exp. Brain Res. 2004, 158, 405–414.
14. Mozolic, J.L.; Hugenschmidt, C.E.; Peiffer, A.M.; Laurienti, P.J. Modality-specific selective attention attenuates multisensory integration. Exp. Brain Res. 2008, 184, 39–52.
15. Talsma, D.; Doty, T.J.; Woldorff, M.G. Selective Attention and Audiovisual Integration: Is Attending to Both Modalities a Prerequisite for Early Integration? Cereb. Cortex 2007, 17, 679–690.
16. Bavelier, D.; Neville, H.J. Cross-modal plasticity: Where and how? Nat. Rev. Neurosci. 2002, 3, 443–452.
17. Butler, B.; Lomber, S. Functional and structural changes throughout the auditory system following congenital and early-onset deafness: Implications for hearing restoration. Front. Syst. Neurosci. 2013, 7, 92.
18. Landry, S.; Bacon, B.A.; Leybaert, J.; Gagné, J.-P.; Champoux, F. Audiovisual Segregation in Cochlear Implant Users. PLoS ONE 2012, 7, e33113.
19. Landry, S.P.; Guillemot, J.-P.; Champoux, F. Temporary Deafness Can Impair Multisensory Integration: A Study of Cochlear-Implant Users. Psychol. Sci. 2013, 24, 1260–1268.
20. Stevenson, R.; Sheffield, S.W.; Butera, I.M.; Gifford, R.H.; Wallace, M. Multisensory integration in cochlear implant recipients. Ear Hear. 2017, 38, 521–538.
21. Tye-Murray, N.; Sommers, M.S.; Spehar, B. Audiovisual Integration and Lipreading Abilities of Older Adults with Normal and Impaired Hearing. Ear Hear. 2007, 28, 656–668.
22. Reis, L.R.; Escada, P. Effect of speechreading in presbycusis: Do we have a third ear? Otolaryngol. Pol. 2017, 71, 38–44.
23. Puschmann, S.; Sandmann, P.; Bendixen, A.; Thiel, C.M. Age-related hearing loss increases cross-modal distractibility. Hear. Res. 2014, 316, 28–36.
24. Rosemann, S.; Thiel, C.M. Audio-visual speech processing in age-related hearing loss: Stronger integration and increased frontal lobe recruitment. NeuroImage 2018, 175, 425–437.
25. Macdonald, J.; McGurk, H. Visual influences on speech perception processes. Percept. Psychophys. 1978, 24, 253–257.
26. Musacchia, G.; Arum, L.; Nicol, T.; Garstecki, D.; Kraus, N. Audiovisual Deficits in Older Adults with Hearing Loss: Biological Evidence. Ear Hear. 2009, 30, 505–514.
27. Meredith, M.A.; Keniston, L.P.; Allman, B.L. Multisensory dysfunction accompanies crossmodal plasticity following adult hearing impairment. Neuroscience 2012, 214, 136–148.
28. Mozolic, J.L.; Hugenschmidt, C.E.; Peiffer, A.M.; Laurienti, P.J. Multisensory Integration and Aging. In The Neural Bases of Multisensory Processes; Murray, M.M., Wallace, M.T., Eds.; CRC Press/Taylor & Francis: Boca Raton, FL, USA, 2012.
29. Jones, S.A.; Noppeney, U. Ageing and multisensory integration: A review of the evidence, and a computational perspective. Cortex 2021, 138, 1–23.
30. Laurienti, P.J.; Burdette, J.H.; Maldjian, J.A.; Wallace, M.T. Enhanced multisensory integration in older adults. Neurobiol. Aging 2006, 27, 1155–1163.
31. Peiffer, A.M.; Mozolic, J.L.; Hugenschmidt, C.E.; Laurienti, P.J. Age-related multisensory enhancement in a simple audiovisual detection task. NeuroReport 2007, 18, 1077–1081.
32. Diederich, A.; Colonius, H. Bimodal and trimodal multisensory enhancement: Effects of stimulus onset and intensity on reaction time. Percept. Psychophys. 2004, 66, 1388–1404.
33. Perrott, D.R.; Saberi, K.; Brown, K.; Strybel, T.Z. Auditory psychomotor coordination and visual search performance. Percept. Psychophys. 1990, 48, 214–226.
34. Lueck, C.J.; Crawford, T.J.; Savage, C.J.; Kennard, C. Auditory-visual interaction in the generation of saccades in man. Exp. Brain Res. 1990, 82, 149–157.
35. Frens, M.A.; van Opstal, A.J.; van der Willigen, R.F. Spatial and temporal factors determine auditory-visual interactions in human saccadic eye movements. Percept. Psychophys. 1995, 57, 802–816.
36. Hughes, H.C.; Nelson, M.D.; Aronchick, D.M. Spatial characteristics of visual-auditory summation in human saccades. Vis. Res. 1998, 38, 3955–3963.
37. Corneil, B.D.; van Wanrooij, M.; Munoz, D.P.; van Opstal, A.J. Auditory-Visual Interactions Subserving Goal-Directed Saccades in a Complex Scene. J. Neurophysiol. 2002, 88, 438–454.
38. Colonius, H.; Diederich, A. Multisensory Interaction in Saccadic Reaction Time: A Time-Window-of-Integration Model. J. Cogn. Neurosci. 2004, 16, 1000–1009.
39. van Wanrooij, M.M.; Bell, A.H.; Munoz, D.P.; van Opstal, A.J. The effect of spatial–temporal audiovisual disparities on saccades in a complex scene. Exp. Brain Res. 2009, 198, 425–437.
40. Kapoula, Z.; Pain, E. Differential Impact of Sound on Saccades Vergence and Combined Eye Movements: A Multiple Case Study. CSMC 2020, 5, 95.
41. Stroop, J.R. Studies of interference in serial verbal reactions. J. Exp. Psychol. 1935, 18, 643–662.
42. Bugg, J.M.; DeLosh, E.L.; Davalos, D.B.; Davis, H.P. Age Differences in Stroop Interference: Contributions of General Slowing and Task-Specific Deficits. Aging Neuropsychol. Cogn. 2007, 14, 155–167.
43. Troyer, A.K.; Leach, L.; Strauss, E. Aging and Response Inhibition: Normative Data for the Victoria Stroop Test. Aging Neuropsychol. Cogn. 2006, 13, 20–35.
44. Bayard, S.; Erkes, J.; Moroni, C. Victoria Stroop Test: Normative Data in a Sample Group of Older People and the Study of Their Clinical Applications in the Assessment of Inhibition in Alzheimer's Disease. Arch. Clin. Neuropsychol. 2011, 26, 653–661.
45. Graf, P.; Uttl, B.; Tuokko, H. Color- and picture-word stroop tests: Performance changes in old age. J. Clin. Exp. Neuropsychol. 1995, 17, 390–415.
46. Talsma, D.; Woldorff, M.G. Selective Attention and Multisensory Integration: Multiple Phases of Effects on the Evoked Brain Activity. J. Cogn. Neurosci. 2005, 17, 1098–1114.
47. Yang, L.; Hasher, L. The Enhanced Effects of Pictorial Distraction in Older Adults. J. Gerontol. Ser. B 2007, 62, P230–P233.
48. Poliakoff, E.; Ashworth, S.; Lowe, C.; Spence, C. Vision and touch in ageing: Crossmodal selective attention and visuotactile spatial interactions. Neuropsychologia 2006, 44, 507–517.
49. Andrés, P.; Parmentier, F.B.R.; Escera, C. The effect of age on involuntary capture of attention by irrelevant sounds: A test of the frontal hypothesis of aging. Neuropsychologia 2006, 44, 2564–2568.
50. Olusanya, B.O.; Davis, A.C.; Hoffman, H.J. Hearing loss grades and the International classification of functioning, disability and health. Bull. World Health Organ. 2019, 97, 725–728.
51. Lafon, J.-C. Le Test Phonétique et la Mesure de L'audition; Editions Centrex; Dunod: Paris, France, 1964.
52. Collège National D'Audioprothèse. Précis D'audioprothèse Tome III-Le Contrôle D'efficacité Prothétique. 2007. Available online: https://www.college-nat-audio.fr/index.php/ouvrage/precis-daudioprothese-tome-iii-le-controle-defficacite-prothetique (accessed on 28 August 2021).
53. Kassner, M.; Patera, W.; Bulling, A. Pupil: An open source platform for pervasive eye tracking and mobile gaze-based interaction. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication, Washington, DC, USA, 13–17 September 2014; ACM: New York, NY, USA, 2014; pp. 1151–1160.
54. Hung, G.K.; Semmlow, J.L.; Ciuffreda, K.J. A Dual-Mode Dynamic Model of the Vergence Eye Movement System. IEEE Trans. Biomed. Eng. 1986, 33, 1021–1028.
55. Hung, G.K. Models of Oculomotor Control; World Scientific: Singapore, 2001.
56. Semmlow, J.L.; Hung, G.K.; Horng, J.-L.; Ciuffreda, K.J. Disparity vergence eye movements exhibit preprogrammed motor control. Vis. Res. 1994, 34, 1335–1343.
57. Bayard, S.; Erkes, J.; Moroni, C.; Collège des Psychologues Cliniciens spécialisés en Neuropsychologie du Languedoc Roussillon (CPCN-LR). F-SV Test du Stroop Victoria-Adaptation Francophone: Matériel, Consignes, Procédure de Cotation et Données Normatives. 2009. Available online: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.475.3053&rep=rep1&type=pdf (accessed on 20 February 2022).
  58. Munoz, D.P.; Broughton, J.R.; Goldring, J.E.; Armstrong, I.T. Age-related performance of human subjects on saccadic eye movement tasks. Exp. Brain Res. 1998, 121, 391–400. [Google Scholar] [CrossRef] [PubMed]
  59. Sharpe, J.A.; Zackon, D.H. Senescent Saccades: Effects of Aging on Their Accuracy, Latency and Velocity. Acta Oto-Laryngol. 1987, 104, 422–428. [Google Scholar] [CrossRef] [PubMed]
  60. Rambold, H.; Neumann, G.; Sander, T.; Helmchen, C. Age-related changes of vergence under natural viewing conditions. Neurobiol. Aging 2006, 27, 163–172. [Google Scholar] [CrossRef] [PubMed]
  61. Yang, Q.; Wang, T.; Su, N.; Xiao, S.; Kapoula, Z. Specific saccade deficits in patients with Alzheimer’s disease at mild to moderate stage and in patients with amnestic mild cognitive impairment. AGE 2013, 35, 1287–1298. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  62. Diaconescu, A.O.; Hasher, L.; McIntosh, A.R. Visual dominance and multisensory integration changes with age. NeuroImage 2013, 65, 152–166. [Google Scholar] [CrossRef] [Green Version]
  63. Mahoney, J.R.; Li, P.C.C.; Oh-Park, M.; Verghese, J.; Holtzer, R. Multisensory integration across the senses in young and old adults. Brain Res. 2011, 1426, 43–53. [Google Scholar] [CrossRef] [Green Version]
  64. Zou, Z.; Chau, B.K.H.; Ting, K.-H.; Chan, C.C.H. Aging Effect on Audiovisual Integrative Processing in Spatial Discrimination Task. Front. Aging Neurosci. 2017, 9, 374. [Google Scholar] [CrossRef]
  65. Wu, J.; Yang, W.; Gao, Y.; Kimura, T. Age-related multisensory integration elicited by peripherally presented audiovisual stimuli. NeuroReport 2012, 23, 616–620. [Google Scholar] [CrossRef]
  66. Stephen, J.M.; Knoefel, J.E.; Adair, J.; Hart, B.; Aine, C.J. Aging-related changes in auditory and visual integration measured with MEG. Neurosci. Lett. 2010, 484, 76–80. [Google Scholar] [CrossRef] [Green Version]
  67. Rizzolatti, G.; Riggio, L.; Sheliga, B.M. Space and Selective attention. In Attention and Performance XV: Conscious and Nonconscious Information Processing; MIT Press: Cambridge, MA, USA, 1994. [Google Scholar]
  68. Risoud, M.; Hanson, J.-N.; Gauvrit, F.; Renard, C.; Lemesre, P.-E.; Bonne, N.-X.; Vincent, C. Sound source localization. Eur. Ann. Otorhinolaryngol. Head Neck Dis. 2018, 135, 259–264. [Google Scholar] [CrossRef]
  69. Dobreva, M.S.; O’Neill, W.E.; Paige, G.D. Influence of aging on human sound localization. J. Neurophysiol. 2011, 105, 2471–2486. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  70. Abel, S.M.; Giguère, C.; Consoli, A.; Papsin, B.C. The effect of aging on horizontal plane sound localization. J. Acoust. Soc. Am. 2000, 108, 743–752. [Google Scholar] [CrossRef] [PubMed]
  71. Noble, W.; Byrne, D.; Lepage, B. Effects on sound localization of configuration and type of hearing impairment. J. Acoust. Soc. Am. 1994, 95, 992–1005. [Google Scholar] [CrossRef] [PubMed]
  72. Fischer, B.; Weber, H. Express saccades and visual attention. Behav. Brain Sci. 1993, 16, 553–567. [Google Scholar] [CrossRef] [Green Version]
  73. Fischer, B.; Breitmeyer, B. Mechanisms of visual attention revealed by saccadic eye movements. Neuropsychologia 1987, 25, 73–83. [Google Scholar] [CrossRef]
  74. Commodari, E.; Guarnera, M. Attention and aging. Aging Clin. Exp. Res. 2008, 20, 578–584. [Google Scholar] [CrossRef]
75. Groth, K.E.; Allen, P.A. Visual attention and aging. Front. Biosci. 2000, 5, D284–D297. [Google Scholar] [CrossRef] [Green Version]
  76. Sumby, W.H.; Pollack, I. Visual Contribution to Speech Intelligibility in Noise. J. Acoust. Soc. Am. 1954, 26, 212. [Google Scholar] [CrossRef]
  77. Arnold, P.; Hill, F. Bisensory augmentation: A speechreading advantage when speech is clearly audible and intact. Br. J. Psychol. 2001, 92 Pt 2, 339–355. [Google Scholar] [CrossRef]
  78. Reisberg, D.; McLean, J.; Goldfield, A. Easy to hear but hard to understand: A lip-reading advantage with intact auditory stimuli. In Hearing by Eye: The Psychology of Lip-Reading; Lawrence Erlbaum Associates, Inc.: Hillsdale, NJ, USA, 1987; pp. 97–113. [Google Scholar]
79. Haxby, J.V.; Horwitz, B.; Ungerleider, L.G.; Maisog, J.M.; Pietrini, P.; Grady, C.L. The functional organization of human extrastriate cortex: A PET-rCBF study of selective attention to faces and locations. J. Neurosci. 1994, 14, 6336–6353. [Google Scholar] [CrossRef]
  80. Laurienti, P.J.; Burdette, J.H.; Wallace, M.T.; Yen, Y.-F.; Field, A.S.; Stein, B.E. Deactivation of Sensory-Specific Cortex by Cross-Modal Stimuli. J. Cogn. Neurosci. 2002, 14, 420–429. [Google Scholar] [CrossRef] [PubMed]
  81. Johnson, J.A.; Zatorre, R.J. Neural substrates for dividing and focusing attention between simultaneous auditory and visual events. NeuroImage 2006, 31, 1673–1681. [Google Scholar] [CrossRef] [PubMed]
  82. Healey, M.K.; Campbell, K.L.; Hasher, L. Chapter 22 Cognitive aging and increased distractibility: Costs and potential benefits. In Progress in Brain Research; Sossin, W.S., Lacaille, J.-C., Castellucci, V.F., Belleville, S., Eds.; Elsevier: Amsterdam, The Netherlands, 2008; Volume 169, pp. 353–363. [Google Scholar] [CrossRef]
Figure 1. Top-view of the position of the LEDs for the saccades test (left) and for the vergence test (right).
Figure 2. Hearing loss (HL) and Stroop score characterization of EG. (A) Classification of the PTA according to the WHO scale, for EG. (B) Classification of the Stroop_I/D according to the model built in the study of Bayard et al. [57], for elderly participants. This model allows the categorization of the Stroop_I/D score as a function of the participant’s age above 50 years. The score can be classified into five categories: “deficit”, “limit”, “mean”, “superior”, and “very superior”.
Figure 3. Correlations and regression lines between convergence AVI and age, for (A) AVI(Lat), (B) AVI(AVel), and (C) AVI(Amp), regarding the whole population (young and elderly groups) and elderly group alone. Blue squares represent the young participants; red triangles represent the elderly participants. The black solid line represents the regression line for the whole population and the red dashed line represents the regression line for the elderly group. “r” represents the Pearson correlation coefficient and “p” represents the significance of the slopes of the regression line.
Figure 4. Correlations and regression lines between AVI(Lat) for saccades and hearing tests (PTA and SRT50), for the whole population (YG + EG) and for the elderly participants alone (EG). The left column (A,C) is for Left Saccade (LS); the right column (B,D) is for Right Saccade (RS); the first line is for the PTA; the second line is for SRT50. Blue squares represent the participants of the YG; red triangles represent the participants of the EG. The black solid line represents the regression line for the whole population (YG + EG) and the red dashed line represents the regression line for the elderly participants alone (EG). “r” represents the Pearson correlation coefficient and “p” represents the significance of the slope of the regression line.
Figure 5. Correlations and regression lines between saccades AVI(Lat) and Stroop_D/I, for (A) Left Saccade and (B) Right Saccade, regarding the whole population (YG + EG). Blue squares represent the participants of the YG; red triangles represent the participants of the EG. The black solid line represents the regression line for the whole population (YG + EG). “r” represents the Pearson correlation coefficient and “p” represents the significance of the slope of the regression line.
Table 1. Eye movement characteristics depending on the group (YG: young group; EG: elderly group). Means and standard deviations. Movements: D, divergence; C, convergence; LS, left saccade; RS, right saccade.
| Movement | Group | Lat | PVel | AVel | Amp |
|---|---|---|---|---|---|
| D | YG | 309 (52) | 74 (27) | 19 (4) | 62 (18) |
| D | EG | 371 (62) | 78 (25) | 15 (4) | 47 (15) |
| C | YG | 307 (57) | 69 (37) | 23 (5) | 67 (20) |
| C | EG | 360 (62) | 75 (35) | 20 (6) | 52 (21) |
| LS | YG | 245 (32) | 341 (69) | 83 (7) | 93 (8) |
| LS | EG | 298 (55) | 318 (85) | 78 (8) | 88 (8) |
| RS | YG | 255 (35) | 346 (72) | 84 (6) | 94 (6) |
| RS | EG | 307 (57) | 338 (93) | 78 (8) | 89 (10) |
Table 2. Hearing characteristics depending on the group. Means and sd.
| Group | PTA | SRT50 | SND50 |
|---|---|---|---|
| YG | 9.0 (3.0) | 21.6 (4.2) | −11.8 (2.9) |
| EG | 22.8 (8.7) | 31.2 (8.3) | −10.4 (2.7) |
Table 3. Relationships between the AVI scores and age, from simple linear regressions analyses.
| Movement | Group | AVI(Lat) ~ Age (Intercept / a / cor) | AVI(PVel) ~ Age (Intercept / a / cor) | AVI(AVel) ~ Age (Intercept / a / cor) | AVI(Amp) ~ Age (Intercept / a / cor) |
|---|---|---|---|---|---|
| D | EG + YG | −23.808 / 0.352 / 0.136 | 8.657 / −0.19 / −0.147 | 0.694 / −0.008 / −0.041 | 6.083 / −0.075 / −0.1 |
| D | EG | 18.003 / −0.242 / −0.041 | −11.922 / 0.099 / 0.036 | −2.184 / 0.034 / 0.08 | −9.33 / 0.145 / 0.096 |
| C | EG + YG | −41.214 / 0.695 * / 0.226 | −2.929 / 0.142 / 0.085 | 1.27 / −0.016 / −0.062 | 4.555 / −0.058 / −0.061 |
| C | EG | 126.017 / −1.748 * / −0.245 | 14.389 / −0.108 / 0.026 | −13.029 / 0.193 * / 0.307 | −41.297 / 0.613 * / 0.279 |
| LS | EG + YG | −13.377 / −0.194 / 0.090 | −4.353 / 0.218 / 0.08 | −0.47 / 0.021 / 0.08 | −1.052 / 0.032 / 0.106 |
| LS | EG | −27.334 / 0.001 / 0.000 | 23.981 / −0.199 / −0.032 | 0.236 / 0.012 / 0.018 | 4.069 / −0.042 / −0.057 |
| RS | EG + YG | −15.771 / −0.097 / −0.042 | 5.309 / −0.048 / −0.015 | 1.011 / −0.001 / −0.004 | 0.325 / 0.012 / 0.037 |
| RS | EG | 19.101 / −0.62 / −0.099 | −37.628 / 0.588 / 0.069 | −2.712 / 0.055 / 0.074 | −0.455 / 0.025 / 0.031 |
“*”: 0.01 < p < 0.05
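As an illustration only (this is not the authors' analysis code), the three values reported per cell in Table 3 — the regression intercept, the slope "a", and the Pearson correlation "cor" — come from an ordinary simple linear regression of an AVI score on age. A minimal sketch, using hypothetical data and a hypothetical helper name `simple_linear_regression`:

```python
# Sketch of how an "AVI ~ Age" simple linear regression yields the
# intercept, slope a, and Pearson correlation cor reported in Table 3.

def simple_linear_regression(x, y):
    """Return (intercept, slope, pearson_r) for the regression y ~ x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)          # sum of squares of x
    syy = sum((yi - my) ** 2 for yi in y)          # sum of squares of y
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))  # cross-products
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5                   # Pearson correlation
    return intercept, slope, r

# Hypothetical data: participant ages (years) vs. an AVI latency score.
ages = [25, 30, 55, 60, 65, 70, 75, 80]
avi_lat = [-20, -18, -5, -8, 2, 1, 6, 9]
intercept, a, cor = simple_linear_regression(ages, avi_lat)
print(intercept, a, cor)
```

The significance stars in the table would then come from a t-test on the slope, which this sketch omits.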
Table 4. Relations between AVI(Lat) and hearing.
| Movement | Group | AVI(Lat) ~ PTA (a / StdError / t-value) | AVI(Lat) ~ SRT50 (a / StdError / t-value) | AVI(Lat) ~ SND50 (a / StdError / t-value) |
|---|---|---|---|---|
| D | EG + YG | −0.011 / 0.837 / −0.013 | 0.655 / 0.841 / 0.778 | 0.377 / 2.302 / 0.164 |
| D | EG | −0.298 / 0.829 / −0.359 | 0.632 / 0.874 / 0.724 | 0.673 / 2.568 / 0.262 |
| C | EG + YG | 0.319 / 0.99 / 0.322 | 0.845 / 0.996 / 0.848 | −2.937 / 2.808 / −1.046 |
| C | EG | 1.111 / 0.998 / 1.114 | 1.631 / 1.045 / 1.561 | −3.431 / 3.137 / −1.094 |
| LS | EG + YG | −1.276 . / 0.708 / −1.802 | −1.478 * / 0.71 / −2.081 | 0.527 / 1.85 / 0.285 |
| LS | EG | −1.361 . / 0.8 / −1.701 | −1.780 * / 0.834 / −2.133 | 0.145 / 2.272 / 0.064 |
| RS | EG + YG | −2.056 ** / 0.755 / −2.722 | −1.853 * / 0.77 / −2.407 | 1.224 / 2.04 / 0.6 |
| RS | EG | −1.859 * / 0.857 / −2.17 | −1.827 * / 0.911 / −2.006 | 0.4 / 2.55 / 0.157 |
“.”: 0.05 < p < 0.1; “*”: 0.01 < p < 0.05; “**”: 0.001 < p < 0.01
Table 5. Relations between the AVI scores and the Stroop_I/D score.
| Movement | Group | AVI(Lat) ~ Stroop_I/D (a / StdErr / t-value) | AVI(PVel) ~ Stroop_I/D (a / StdErr / t-value) | AVI(AVel) ~ Stroop_I/D (a / StdErr / t-value) | AVI(Amp) ~ Stroop_I/D (a / StdErr / t-value) |
|---|---|---|---|---|---|
| D | EG + YG | 7.101 / 13.323 / 0.533 | 4.006 / 7.225 / 0.554 | −1.52 / 1.318 / −1.153 | −2.12 / 4.88 / −0.434 |
| D | EG | −1.36 / 13.423 / −0.101 | 4.928 / 8.756 / 0.563 | −1.552 / 1.377 / −1.127 | −2.362 / 5.017 / −0.471 |
| C | EG + YG | −15.334 / 15.781 / −0.972 | 6.247 / 11.935 / 0.523 | 0.355 / 1.787 / 0.199 | 1.346 / 6.552 / 0.205 |
| C | EG | −14.084 / 16.207 / −0.869 | 2.653 / 14.153 / 0.187 | −0.076 / 1.854 / −0.041 | −0.601 / 6.6 / −0.091 |
| LS | EG + YG | −21.831 . / 11.307 / −1.931 | 6.71 / 15.293 / 0.438 | 0.59 / 1.6 / 0.369 | 0.813 / 1.797 / 0.452 |
| LS | EG | −21.625 / 12.948 / −1.67 | 6.432 / 14.719 / 0.437 | 0.742 / 1.754 / 0.423 | 0.88 / 1.93 / 0.456 |
| RS | EG + YG | −25.842 * / 12.271 / −2.106 | −11.902 / 18.094 / −0.658 | 1.6 / 1.8 / 0.889 | 1.556 / 1.845 / 0.843 |
| RS | EG | −21.045 / 14.139 / −1.488 | −18.819 / 21.155 / −0.89 | 1.554 / 1.915 / 0.811 | 1.394 / 1.97 / 0.707 |
“.”: 0.05 < p < 0.1; “*”: 0.01 < p < 0.05
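For readers checking Tables 4 and 5: each regression coefficient "a" is reported with its standard error, the t-value is (up to rounding) their ratio, and the footnote markers flag p-value ranges. A minimal sketch of this reporting convention (assumed from the tables, not taken from the paper's code; the helper names are hypothetical):

```python
# Sketch of the reporting convention in Tables 4 and 5: t = a / StdError,
# with footnote markers for the p-value ranges given under each table.

def t_value(a, std_error):
    """t-statistic for a regression coefficient a with standard error std_error."""
    return a / std_error

def significance_marker(p):
    """Footnote marker used in Tables 4 and 5 for a given p-value."""
    if 0.001 < p <= 0.01:
        return "**"
    if 0.01 < p <= 0.05:
        return "*"
    if 0.05 < p <= 0.1:
        return "."
    return ""

# Example: the Right Saccade (EG + YG) cell of Table 4, AVI(Lat) ~ PTA:
# a = -2.056, StdError = 0.755; the ratio reproduces the reported
# t-value of about -2.72 (the table lists -2.722, computed from
# unrounded coefficients), marked "**" since 0.001 < p < 0.01.
print(round(t_value(-2.056, 0.755), 2))
```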
Share and Cite

MDPI and ACS Style

Chavant, M.; Kapoula, Z. Audiovisual Integration for Saccade and Vergence Eye Movements Increases with Presbycusis and Loss of Selective Attention on the Stroop Test. Brain Sci. 2022, 12, 591. https://doi.org/10.3390/brainsci12050591