Article

P1 and N1 Characteristics in Individuals with Normal Hearing and Hearing Loss, and Cochlear Implant Users: A Pilot Study

by Hye Yoon Seol 1, Soojin Kang 2, Sungkean Kim 3,4, Jihoo Kim 4, Euijin Kim 3, Sung Hwa Hong 5 and Il Joon Moon 6,7,*

1 Department of Communication Disorders, Ewha Womans University, Seoul 03760, Republic of Korea
2 Center for Digital Humanities and Computational Social Sciences, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
3 Department of Human–Computer Interaction, Hanyang University, Ansan 15588, Republic of Korea
4 Department of Interdisciplinary Robot Engineering Systems, Hanyang University, Ansan 15588, Republic of Korea
5 Department of Otolaryngology-Head and Neck Surgery, Soree Ear Clinic, Seoul 07560, Republic of Korea
6 Hearing Research Laboratory, Samsung Medical Center, Seoul 16419, Republic of Korea
7 Department of Otolaryngology-Head & Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 03181, Republic of Korea
* Author to whom correspondence should be addressed.
J. Clin. Med. 2024, 13(16), 4941; https://doi.org/10.3390/jcm13164941
Submission received: 31 July 2024 / Revised: 14 August 2024 / Accepted: 20 August 2024 / Published: 22 August 2024

Abstract:
Background: It has been reported in many previous studies that the lack of auditory input due to hearing loss (HL) can induce changes in the brain. However, most of these studies have focused on individuals with pre-lingual HL and have predominantly compared children with normal hearing (NH) to child cochlear implant (CI) users. This study examined the visual and auditory evoked potential characteristics of NH listeners, individuals with bilateral HL, and CI users, including those with single-sided deafness. Methods: A total of sixteen participants (seven NH listeners, four individuals with bilateral sensorineural HL, and five CI users) completed speech testing in quiet and in noise as well as evoked potential testing. For speech testing, the Korean version of the Hearing in Noise Test was used to assess speech understanding in quiet and in noise (noise from the front, +90 degrees, and −90 degrees). For evoked potential testing, visual and auditory (1000 Hz, /ba/, and /da/) evoked potentials were measured. Results: CI users understood speech better than those with HL in all conditions except the noise from +90 and −90 degrees conditions. In the CI group, a decrease in P1 amplitudes was noted across all channels after implantation. The NH group exhibited the highest amplitudes, followed by the HL group, with the CI group (post-CI) showing the lowest amplitudes. In terms of auditory evoked potentials, the smallest amplitude was observed in the pre-CI condition regardless of the type of stimulus. Conclusions: To the best of our knowledge, this is the first study to examine visual and auditory evoked potentials across such varied hearing profiles. The characteristics of evoked potentials varied across participant groups, and further studies with CI users are necessary, as artifacts on the CI side pose significant challenges for collecting and analyzing evoked potentials.

1. Introduction

Hearing loss (HL) refers to the impairment of auditory function, and numerous studies have reported its negative impacts on quality of life [1,2,3]. When HL occurs, individuals begin to experience difficulty perceiving and understanding speech, ultimately leading to communication breakdown. These breakdowns in communication can manifest in educational and occupational settings, affecting individuals’ performance. In addition, recent studies have reported a potential link between HL and dementia, leading to an increased focus on studying HL [4]. Therefore, early and appropriate interventions for HL are important. HL management or aural rehabilitation typically begins with hearing aids, which amplify sounds to improve audibility [5]. Healthcare professionals first program hearing aids based on individuals’ auditory characteristics [6]. Then, the individuals go through adjustment periods, during which they communicate in various situations, such as quiet and noisy places. Based on these experiences, the hearing aids are fine-tuned or adjusted during follow-up visits. If individuals do not receive much benefit from hearing aids, a cochlear implant (CI) could be considered. Similar to hearing aids, the primary goal of CIs is to enhance audibility by providing electrical stimulation, ultimately improving quality of life. CIs have been known to be effective for individuals for whom rehabilitation with conventional hearing aids was not effective [7,8,9]. Over time, more people have received benefit from CIs, as technological advancements in CIs have been made and CI candidacy has been expanded [10,11]. For example, there have been significant advancements in CI technology, transitioning from single-channel to multi-channel devices with varying numbers of electrodes. Additionally, features, such as beamforming and noise reduction, have been developed to enhance auditory perception in everyday environments. 
In 2008, a hybrid model combining a CI and a hearing aid was introduced. The design of CIs has also diversified, with behind-the-ear and off-the-ear models, reflecting ongoing improvements in both functionality and user convenience. However, it is important to note that CI use does not always lead to improvements in speech understanding [12,13,14,15,16,17,18,19], and individual variability in CI outcomes remains one of the most challenging issues in CI research. One of the well-researched benefits of CIs is in the area of speech recognition. Factors that can affect speech recognition include the duration of HL, the onset of HL, the position of the CI electrode array, the duration of auditory deprivation, and so on [16,18,20,21,22]. Generally, individuals who receive hearing devices early or have a shorter duration of HL tend to have better outcomes [23]. However, there are cases where individuals with similar demographic information and medical histories exhibit poor outcomes, and the underlying mechanisms for this are not well understood [12].
Currently, in clinical settings, speech performance can be examined using tests at the monosyllable, word, and sentence levels [24]. To reflect various communication environments, there are tools available that allow for testing not only in quiet conditions but also in noise. In addition to these tests, electrophysiological testing, such as cortical auditory evoked potentials (CAEPs), is also conducted to measure CI benefits at the central level. CAEPs refer to electrical activities from neurons in the auditory cortex [25]. They are typically recorded with electrodes placed on the scalp and have been extensively investigated, as they can objectively assess the functionality and maturity of the central auditory system [26]. For adults, the main component of the CAEP is the P1–N1–P2 complex, which appears 50 to 200 ms after stimulation: P1 is a positive peak appearing at approximately 50 ms, N1 is a negative peak at approximately 100 ms, and P2 is a positive peak at approximately 200 ms [27]. Research related to CAEPs and HL has been conducted across various age groups, including children and the elderly, and across different hearing devices, such as hearing aids and CIs. Sandmann et al. (2012) recruited 22 individuals (11 with NH and 11 CI users with post-lingual HL) and explored their visual evoked potentials (VEPs), which are electrical signals evoked by visual stimulation. For analysis, the authors compared the amplitudes and latencies of P100 (the same as P1, a positive peak generally occurring at around 100 ms), N150, and P270. CI users had lower P100 VEP amplitudes and shorter P100 latencies than those with NH, and recruitment of the right auditory cortex was observed in CI users [28]. For children with congenital HL, Sharma et al. (2002) reported that those with less than 3.5 years of HL showed P1 latencies within normal limits within 6 months of using a CI [29].
While there are studies that have explored electrophysiological characteristics in individuals with normal hearing (NH) and HL using CAEPs, most of these studies have focused on children, with relatively few examining adults. Additionally, there is a significant lack of research involving individuals using hearing aids or CIs, as well as those with diverse hearing characteristics. This study explores the P1 and N1 characteristics of individuals with various auditory characteristics. We hypothesized that individuals with HL would exhibit a larger amplitude than those with NH, and that CI users would show a smaller amplitude after CI surgery.

2. Materials and Methods

2.1. Participants

The inclusion criteria for this prospective cohort study, conducted from 2020 to 2023, included adults aged 19 years and older. The NH group included individuals with hearing test results showing thresholds of 25 dB HL or below at frequencies from 125 to 8000 Hz. The HL group included those with sensorineural HL above 30 dB HL based on the four-frequency pure-tone average (500, 1000, 2000, and 4000 Hz). The CI group comprised individuals with severe to profound HL scheduled for CI surgery. The exclusion criteria included individuals who had difficulty watching TV from a distance of 1 m and those with otological pathology and neurological and mental disorders. All experimental procedures were approved by Samsung Medical Center’s Institutional Review Board. Prior to testing, an informed consent document was obtained from the participants.

2.2. Pure-Tone Audiometry

Pure-tone audiometry was performed in a sound booth using insert earphones and an AudioStar Pro (Grason-Stadler, Eden Prairie, MN, USA) audiometer.

2.3. Speech Testing

The Korean version of the Hearing in Noise Test (K-HINT) is a speech-in-noise test widely used in South Korea. The K-HINT has a total of 240 sentences (20 sentences per list × 12 lists). The target sentences were presented through a loudspeaker located in front of the participants in a sound-treated booth using HINT pro 7.2 (Natus, Middleton, WI, USA). The participants were asked to listen to the sentences and then repeat them back to the tester. The testing was conducted in four conditions: quiet, noise from the front, noise from +90°, and noise from −90°. The presentation level was 65 dBA. In the conditions involving noise, the testing began at a 0 dB signal-to-noise ratio (SNR). If the participant correctly repeated the sentence, the level of the speech was decreased by 4 dB. If the participant incorrectly repeated the sentence, the speech level was increased by 2 dB. The K-HINT was performed twice, and the average was calculated for all participants.
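The adaptive rule above (start at 0 dB SNR; lower the speech level 4 dB after a correct repetition, raise it 2 dB after an incorrect one) can be sketched in a few lines. This is an illustrative sketch only: the function names are hypothetical, and the actual testing was run with the HINT pro 7.2 software, which also estimates the speech reception threshold (commonly by averaging the SNRs of later trials).

```python
def khint_adaptive_snr(responses, start_snr=0.0, down_step=4.0, up_step=2.0):
    """Track the speech level (dB SNR) across trials using the adaptive rule:
    correct -> lower speech by 4 dB, incorrect -> raise it by 2 dB.
    `responses` is a sequence of booleans (True = sentence repeated correctly).
    Returns the SNR presented at each trial, starting with the initial SNR."""
    snr = start_snr
    track = [snr]
    for correct in responses:
        snr = snr - down_step if correct else snr + up_step
        track.append(snr)
    return track


def estimate_srt(track, skip=4):
    """Hypothetical threshold estimate: mean SNR after the first `skip` trials,
    once the track has converged near the listener's reception threshold."""
    tail = track[skip:]
    return sum(tail) / len(tail)
```

For example, the response pattern correct, incorrect, correct yields the track `[0.0, -4.0, -2.0, -6.0]`, oscillating around the listener's threshold.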

2.4. CAEP Recording and Preprocessing

All recordings were conducted with the ActiveTwo BioSemi system (Amsterdam, The Netherlands). The electrodes were placed according to the 10–20 system at Cz, Pz, Fz, T7, T8, O1, O2, and Oz. Reference electrodes were placed on the mastoids. Four additional electrodes were placed above and below the left eye and at the outer canthi of both eyes for electro-oculograms. The sampling rate was 2048 Hz, and electrode impedances were kept below 5 kΩ. The acquired EEG data were filtered using a 1–30 Hz band-pass filter, and visual inspection was performed for movement artifacts. The data were then epoched from 100 ms pre-stimulus to 500 ms post-stimulus. For baseline correction, the mean value of the pre-stimulus interval was subtracted from each epoch. Epochs containing significant physiological artifacts (amplitude exceeding ±75 μV) at any electrode were rejected. All EEG preprocessing steps and additional analysis procedures were carried out using MATLAB 2021 (MathWorks, Inc., Natick, MA, USA).
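The preprocessing chain described above (1–30 Hz band-pass, −100 to +500 ms epoching, pre-stimulus baseline correction, ±75 μV rejection) was implemented in MATLAB; a minimal Python/SciPy sketch of the same steps, with hypothetical function names and synthetic inputs, might look like this:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 2048  # sampling rate (Hz), as in the recordings


def bandpass(data, lo=1.0, hi=30.0, fs=FS, order=4):
    """Zero-phase 1-30 Hz band-pass filter applied along the sample axis."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, data, axis=-1)


def epoch_and_reject(data, onsets, fs=FS, pre=0.1, post=0.5, thresh_uv=75.0):
    """Cut -100..+500 ms epochs around stimulus onsets (sample indices),
    subtract each channel's pre-stimulus mean (baseline correction), and
    drop epochs whose amplitude exceeds +/-75 uV on any channel."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    kept = []
    for onset in onsets:
        ep = data[:, onset - n_pre:onset + n_post]
        ep = ep - ep[:, :n_pre].mean(axis=1, keepdims=True)  # baseline-correct
        if np.abs(ep).max() <= thresh_uv:                    # artifact rejection
            kept.append(ep)
    return (np.stack(kept) if kept
            else np.empty((0, data.shape[0], n_pre + n_post)))
```

With 8 channels sampled at 2048 Hz, each surviving epoch spans 204 pre-stimulus and 1024 post-stimulus samples, and its pre-stimulus mean is zero by construction.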

2.5. Visual Evoked Potentials

For the VEP, reversed displays of checkerboard patterns, which are widely used due to their simplicity, were employed. The stimuli consisted of black and white squares (10 × 10 pattern-reversal checkerboard) and were presented on a monitor using Neuroscan STIM2 (Charlotte, NC, USA). The stimulus interval was randomized between 900 ms and 1100 ms (mean of 1000 ms), and the stimulus was presented 500 times. The participants were seated in a comfortable chair in a darkened room and asked to look at the center of the checkerboard image during the testing. The distance between the participant and the monitor was 1 m. After preprocessing the EEG data, peak detection of the elicited P100 component was performed. Trials were averaged at the O1, Oz, and O2 electrodes, respectively. The most positive peak amplitude of the P100 component was identified between 60 and 200 ms, a window determined by the time interval between the zero crossings of the grand averaged waveform.

2.6. Auditory Evoked Potentials

AEPs in response to three stimuli (/da/, /ba/, and 1000 Hz) were recorded. Each stimulus had a duration of 170 ms and was presented 300 times, with the stimulus interval randomized between 900 ms and 1100 ms (mean of 1000 ms). The stimuli were presented through a speaker located 1 m from the participants at a presentation level of 65 dB, and white noise was presented to the opposite ear using an insert earphone at 45 dB. The participants sat in a comfortable chair and watched a movie without sound during the testing. After preprocessing the data for each stimulus type, the trials were averaged at the Fz, Cz, Pz, T7, and T8 electrodes, respectively. The N100 component was elicited during the paradigms, and the most negative peak amplitude of the N100 component was identified within designated time windows. The time intervals between the zero crossings of the grand averaged N100 waveforms for each stimulus were used to determine these windows: for the /ba/ stimulus, 90 to 190 ms; for the /da/ stimulus, 60 to 185 ms; and for the 1000 Hz tone, 60 to 185 ms after stimulus onset.
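Both the P100 measure (Section 2.5) and the N100 measure reduce to finding the extremum of the averaged waveform inside a fixed post-stimulus window. A small illustrative sketch (the function name is hypothetical, and the waveform is assumed to start at stimulus onset at sample 0):

```python
import numpy as np


def peak_amplitude(waveform, fs, window, polarity):
    """Return (amplitude, latency_s) of the most positive ('pos', e.g. P100)
    or most negative ('neg', e.g. N100) point of an averaged waveform within
    a time window given in seconds relative to stimulus onset."""
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    seg = waveform[lo:hi]
    idx = np.argmax(seg) if polarity == "pos" else np.argmin(seg)
    return seg[idx], (lo + idx) / fs
```

For instance, the N100 amplitude for the /ba/ stimulus would be read off an averaged waveform as `peak_amplitude(avg, 2048, (0.090, 0.190), "neg")`.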

3. Results

3.1. Participant Characteristics

A total of 16 participants were enrolled in the study. Characteristics of the CI users are described in Table 1. Among the participants, seven had NH, four had bilateral sensorineural HL, and five were CI users. The age range of the participants was from 23 to 67 years old, with a mean age of 44.1 years (SD = 14.2). The four-frequency pure-tone averages of the NH group were 7.1 dB in the right ear and 5.9 dB in the left ear. Individuals in the HL group had moderately severe sensorineural HL in both ears, and their pure-tone averages were 62.5 dB in the right ear and 59.7 dB in the left ear. In the CI group, the pure-tone averages were 59.0 dB in the right ear and 81.5 dB in the left ear. Among the CI group, two individuals (CI1 and CI2) had single-sided deafness in the right ear, and their four-frequency pure-tone averages were 15 and 11.2 dB, respectively.

3.2. Speech Performance

The average K-HINT scores in quiet were 9.2, 61.4, and 31.7 dBA for the NH, HL, and CI groups, respectively. When the noise was presented from the front, the average scores were −4.4, 3.4, and 0.3 dB SNR. In the noise from +90° condition, the average scores were −14.5, −0.3, and 2.6 dB SNR. Lastly, in the noise from −90° condition, the average scores were −12.4, −1.4, and −0.2 dB SNR. In all test conditions, the NH group showed the best performance. Comparing the HL and CI groups, the CI users performed better than those with HL in all conditions except the noise from +90° and −90° conditions.

3.3. Visual Evoked Potentials

Figure 1 illustrates the grand average waveforms for VEP. Only the P1 amplitudes were assessed. Table 2 describes the average P1 amplitudes for all three groups. For O1, the P1 amplitudes for the NH and HL groups were 5.7 and 5.3 µV. For the CI group, the P1 amplitude before implantation was 5.6 µV, and after implantation it was 4.3 µV. For Oz, the amplitudes were 7.6 and 6.1 µV for the NH and HL groups. The amplitudes were 8.2 and 5.5 µV in the pre- and post-CI conditions for the CI users. Lastly, for O2, the P1 amplitudes were observed to be 7.4 and 5.7 µV for the NH and HL groups. The P1 amplitudes were 7.8 and 5.3 µV before and after implantation for the CI users. For the CI group, overall reductions in the P1 amplitudes were observed for all channels after implantation. The NH group showed the largest amplitudes, followed by the HL group and the CI group (post-CI).

3.4. Auditory Evoked Potentials

Table 3 describes the average N1 peak amplitudes for all three groups in each stimulus condition. Figure 2 illustrates the grand average waveforms for AEP. When comparing the average N1 peak amplitude between the NH and HL groups, the N1 peak amplitudes at 1000 Hz were found to be smaller in the HL group. For /ba/, the N1 peak amplitude in the HL group was larger at all electrodes except T8. For /da/, the N1 peak amplitude was larger in the NH group at the Fz, T7, and T8 electrodes. The smallest N1 peak amplitude was observed in the pre-CI condition for all stimuli except for /da/ at the T8 electrode.

4. Discussion

This study investigated speech understanding ability as well as P1 and N1 characteristics in individuals with NH and HL, and CI users. The results revealed that, in terms of speech recognition, the NH listeners understood speech better than the HL and CI groups. Comparing the HL and CI groups, the CI group’s speech performance was better than that of the HL group except in two conditions (+90° and −90°). Regarding the P1 and N1 responses, P1 (VEP) and N1 (AEP) amplitudes were compared across all groups. Compared to the NH group, the HL and CI groups showed smaller P1 amplitudes in the VEP. A comparison of the pre- and post-CI conditions showed that implantation led to smaller P1 amplitudes. For the AEP, the smallest N1 amplitude was generally observed in the pre-CI condition. When comparing the average N1 peak amplitude between the NH and HL groups, the HL group showed smaller N1 peak amplitudes at 1000 Hz. For the /ba/ stimulus, the N1 peak amplitude was greater in the HL group at all electrodes except T8. Conversely, for the /da/ stimulus, the NH group had larger N1 peak amplitudes at the Fz, T7, and T8 electrodes. The findings of this study are, to some extent, in line with previous studies showing that HL leads to poor speech performance and smaller P1 and N1 amplitudes [28,30,31,32,33,34]. Campbell and Sharma (2014) examined the VEP response in nine individuals with NH and eight individuals with mild to moderate HL, and found that the P1, N1, and P2 amplitudes were larger in the HL group [33]. Harkrider et al. (2006) investigated N1–P2 cortical evoked responses in 11 young adults and 10 older adults with NH, and 10 older adults with mild-to-moderate HL. They found that young adults with NH showed the smallest N1 amplitudes, while among the older adults, those with HL exhibited larger N1 amplitudes than those with NH [34].
This study is meaningful, as it investigated the P1 and N1 characteristics of individuals with various hearing profiles, including those with NH, those with HL, CI users, and, specifically, those with single-sided deafness using a CI. However, many aspects need to be improved to explain the electrophysiological characteristics of such diverse groups. First, as in other studies, the small sample size made it difficult to generalize the findings, so further research with a larger sample size is necessary. Regarding various hearing profiles, it would be meaningful to explore changes in electrophysiological characteristics based not only on single-sided deafness but also on various etiologies, durations of HL, types of hearing devices, and so on. While this study used only eight electrodes, employing more electrodes could allow for a more detailed examination of brain-region-specific characteristics. It is also important to investigate brain characteristics at different time points in terms of brain plasticity. Stropahl et al. (2017) noted that cortical change patterns may vary depending on the degree of HL, and that how these patterns change following sensory restoration via CI is not well understood [18]. The authors emphasized the need for prospective longitudinal studies with various time points to understand the factors driving cortical changes and the nature of these patterns. Therefore, including pre-CI as well as post-CI conditions at intervals such as three, six, and nine months would be beneficial. Lastly, while electrophysiological components include not only P1 and N1 but also P2, P300, and others, this study was limited by artifacts, allowing comparison only of the P1 amplitude in the VEP and the N1 amplitude in the AEP. Artifacts in EEG recordings from CI users have been a longstanding issue, and studies are ongoing to address this [35,36]. Intartaglia et al. (2022) emphasized the importance of developing reliable EEG artifact removal techniques, since artifacts caused by CIs can distort EEG responses. However, even though various methods have been employed in past research, it is still difficult to determine the best EEG artifact removal technique, as there is a lack of documentation and consensus.
In summary, research investigating electrophysiological characteristics in individuals with various types and degrees of HL has shown mixed findings. This variability can be attributed to factors such as the limited amount of research in this area, small sample sizes, and methodological differences, including stimulus presentation levels and the signal-to-noise ratio (SNR) [12,37,38,39]. Regarding the stimulus presentation level, in this study, the presentation level of all stimuli was fixed at 65 dBA. Several studies have noted that different CAEP response characteristics can be observed depending on the stimulus intensity and SNR [38,39]. Gurkan et al. (2023) examined CAEP responses in three groups (NH, mild HL, and moderate HL) with a stimulus (/g/) presented at 10, 20, and 30 dB SNR [38]. The authors reported that those with moderate HL showed decreasing N1–P2 responses as the SNR decreased. Considering that everyday communication environments involve various SNRs and that stimulus levels are perceived differently depending on the type and degree of HL, it is essential to take hearing status into account when determining the presentation level of the stimulus. Another methodological difference is the criteria used to distinguish participant characteristics. While this study did not divide CI users into well-performing and poor-performing groups based on speech performance, some studies have categorized participants in this way, leading to differing findings [31,40]. Kim et al. (2016) investigated the VEP characteristics of 14 CI users and 12 NH listeners. When the CI group was divided into poor-performing and well-performing subgroups, the poor-performing CI group showed larger P1 amplitudes in the right temporal cortex and smaller P1 amplitudes at electrodes near the occipital cortex [31]. Doucet et al. (2006) also investigated VEP characteristics in 13 CI users and 16 NH listeners and, similar to Kim et al. (2016), divided the CI users into poor-performing and well-performing groups. While no differences in P1 and N1 amplitudes were observed, the P2 amplitude was significantly larger at the occipital site [40].
As for future research, Pisoni et al. (2017) mentioned that future studies related to CIs should focus more on individuals with poor outcomes rather than those with good outcomes [12]. They noted that, aside from device checks and commonly conducted audiological testing in clinical settings, there is a lack of evaluation and intervention protocols for individuals with poor outcomes. They also suggested that additional assessments of cognitive domains should be incorporated. Since each individual with HL has unique characteristics, it is essential to go beyond conventional audiological testing and include assessments of cognitive and psychosocial domains to accurately understand their characteristics at the peripheral and central levels.

Author Contributions

H.Y.S., S.K. (Soojin Kang) and S.H.H. conceptualized the study; H.Y.S. and S.K. (Soojin Kang) conceived the experiments; S.K. (Sungkean Kim), S.H.H. and I.J.M. reviewed the concept; H.Y.S. and S.K. (Soojin Kang) conducted the experiments; S.K. (Sungkean Kim), J.K. and E.K. analyzed the results; H.Y.S. wrote the main paper; all authors reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT; No. 2020R1F1A1075752).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Samsung Medical Center (protocol code 2020-01-093-005, 2020.05.07).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Dobie, R.A.; Van Hemel, S.; Council, N.R. Committee on Disability Determination for Individuals with Hearing Impairments. Hearing Loss: Determining Eligibility for Social Security Benefits; National Academies Press: Washington, DC, USA, 2004. [Google Scholar]
  2. Ciorba, A.; Bianchini, C.; Pelucchi, S.; Pastore, A. The impact of hearing loss on the quality of life of elderly adults. Clin. Interv. Aging 2012, 7, 159–163. [Google Scholar] [CrossRef]
  3. Punch, J.L.; Hitt, R.; Smith, S.W. Hearing loss and quality of life. J. Commun. Disord. 2019, 78, 33–45. [Google Scholar] [CrossRef]
  4. Thomson, R.S.; Auduong, P.; Miller, A.T.; Gurgel, R.K. Hearing loss as a risk factor for dementia: A systematic review. Laryngoscope Investig. Otolaryngol. 2017, 2, 69–79. [Google Scholar] [CrossRef]
  5. Boothroyd, A. Adult aural rehabilitation: What is it and does it work? Trends Amplif. 2007, 11, 63–71. [Google Scholar] [CrossRef]
  6. Palmer, C.V.; Ortmann, A. Hearing loss and hearing aids. Neurol. Clin. 2005, 23, 901–918. [Google Scholar] [CrossRef]
  7. Seol, H.Y.; Moon, I.J. Hearables as a gateway to hearing health care. Clin. Exp. Otorhinolaryngol. 2022, 15, 127–134. [Google Scholar] [CrossRef]
  8. Laske, R.D.; Veraguth, D.; Dillier, N.; Binkert, A.; Holzmann, D.; Huber, A.M. Subjective and objective results after bilateral cochlear implantation in adults. Otol. Neurotol. 2009, 30, 313–318. [Google Scholar] [CrossRef]
  9. Ketterer, M.C.; Haussler, S.M.; Hildenbrand, T.; Speck, I.; Peus, D.; Rosner, B.; Knopke, S.; Graebel, S.; Olze, H. Binaural Hearing Rehabilitation Improves Speech Perception, Quality of Life, Tinnitus Distress, and Psychological Comorbidities. Otol. Neurotol. 2020, 41, e563–e574. [Google Scholar] [CrossRef]
  10. Varadarajan, V.V.; Sydlowski, S.A.; Li, M.M.; Anne, S.; Adunka, O.F. Evolving Criteria for Adult and Pediatric Cochlear Implantation. Ear Nose Throat J. 2021, 100, 31–37. [Google Scholar] [CrossRef]
  11. Sladen, D.P.; Gifford, R.H.; Haynes, D.; Kelsall, D.; Benson, A.; Lewis, K.; Zwolan, T.; Fu, Q.J.; Gantz, B.; Gilden, J.; et al. Evaluation of a revised indication for determining adult cochlear implant candidacy. Laryngoscope 2017, 127, 2368–2374. [Google Scholar] [CrossRef]
  12. Pisoni, D.B.; Kronenberger, W.G.; Harris, M.S.; Moberly, A.C. Three challenges for future research on cochlear implants. World J. Otorhinolaryngol. Head Neck Surg. 2017, 3, 240–254. [Google Scholar] [CrossRef] [PubMed]
  13. Pisoni, D.B.; Cleary, M. Learning, memory, and cognitive processes in deaf children following cochlear implantation. In Cochlear Implants: Auditory Prostheses and Electric Hearing; Springer: Berlin/Heidelberg, Germany, 2004; pp. 377–426. [Google Scholar]
  14. Pisoni, D.B.; Cleary, M.; Geers, A.E.; Tobey, E.A. Individual differences in effectiveness of cochlear implants in children who are prelingually deaf: New process measures of performance. Volta Rev. 1999, 101, 111. [Google Scholar]
  15. Zeng, F.G. Trends in cochlear implants. Trends Amplif. 2004, 8, 1–34. [Google Scholar] [CrossRef]
  16. Holden, L.K.; Finley, C.C.; Firszt, J.B.; Holden, T.A.; Brenner, C.; Potts, L.G.; Gotter, B.D.; Vanderhoof, S.S.; Mispagel, K.; Heydebrand, G.; et al. Factors affecting open-set word recognition in adults with cochlear implants. Ear Hear. 2013, 34, 342–360. [Google Scholar] [CrossRef] [PubMed]
  17. Lazard, D.S.; Giraud, A.-L.; Gnansia, D.; Meyer, B.; Sterkers, O. Understanding the deafened brain: Implications for cochlear implant rehabilitation. Eur. Ann. Otorhinolaryngol. Head Neck Dis. 2012, 129, 98–103. [Google Scholar] [CrossRef]
  18. Stropahl, M.; Chen, L.C.; Debener, S. Cortical reorganization in postlingually deaf cochlear implant users: Intra-modal and cross-modal considerations. Hear. Res. 2017, 343, 128–137. [Google Scholar] [CrossRef]
  19. Stropahl, M.; Plotz, K.; Schonfeld, R.; Lenarz, T.; Sandmann, P.; Yovel, G.; De Vos, M.; Debener, S. Cross-modal reorganization in cochlear implant users: Auditory cortex contributes to visual face processing. Neuroimage 2015, 121, 159–170. [Google Scholar] [CrossRef]
  20. Finley, C.C.; Holden, T.A.; Holden, L.K.; Whiting, B.R.; Chole, R.A.; Neely, G.J.; Hullar, T.E.; Skinner, M.W. Role of electrode placement as a contributor to variability in cochlear implant outcomes. Otol. Neurotol. 2008, 29, 920–928. [Google Scholar] [CrossRef] [PubMed]
  21. Escudé, B.; James, C.; Deguine, O.; Cochard, N.; Eter, E.; Fraysse, B. The size of the cochlea and predictions of insertion depth angles for cochlear implant electrodes. Audiol. Neurotol. 2006, 11, 27–33. [Google Scholar] [CrossRef] [PubMed]
  22. Lazard, D.S.; Vincent, C.; Venail, F.; Van de Heyning, P.; Truy, E.; Sterkers, O.; Skarzynski, P.H.; Skarzynski, H.; Schauwers, K.; O’Leary, S.; et al. Pre-, per- and postoperative factors affecting performance of postlinguistically deaf adults using cochlear implants: A new conceptual model over time. PLoS ONE 2012, 7, e48739. [Google Scholar] [CrossRef]
  23. Owens, E.; Kessler, D.; Telleen, C.; Schubert, E. The Minimal Auditory Capabilities Battery (Instruction Manual); Auditec: St. Louis, MO, USA, 1981. [Google Scholar]
  24. Zhao, E.E.; Dornhoffer, J.R.; Loftus, C.; Nguyen, S.A.; Meyer, T.A.; Dubno, J.R.; McRackan, T.R. Association of Patient-Related Factors With Adult Cochlear Implant Speech Recognition Outcomes: A Meta-analysis. JAMA Otolaryngol. Head Neck Surg. 2020, 146, 613–620. [Google Scholar] [CrossRef]
  25. Kraus, N.; Nicol, T. Auditory Evoked Potentials. In Encyclopedia of Neuroscience; Binder, M.D., Hirokawa, N., Windhorst, U., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 214–218. [Google Scholar]
  26. Čeponienė, R.; Rinne, T.; Näätänen, R. Maturation of cortical sound processing as indexed by event-related potentials. Clin. Neurophysiol. 2002, 113, 870–882. [Google Scholar] [CrossRef]
  27. Lightfoot, G. Summary of the N1-P2 Cortical Auditory Evoked Potential to Estimate the Auditory Threshold in Adults. Semin. Hear. 2016, 37, 1–8. [Google Scholar] [CrossRef] [PubMed]
  28. Sandmann, P.; Dillier, N.; Eichele, T.; Meyer, M.; Kegel, A.; Pascual-Marqui, R.D.; Marcar, V.L.; Jancke, L.; Debener, S. Visual activation of auditory cortex reflects maladaptive plasticity in cochlear implant users. Brain 2012, 135 Pt 2, 555–568. [Google Scholar] [CrossRef] [PubMed]
  29. Sharma, A.; Dorman, M.; Spahr, A.; Todd, N.W. Early cochlear implantation in children allows normal development of central auditory pathways. Ann. Otol. Rhinol. Laryngol. Suppl. 2002, 189, 38–41. [Google Scholar] [CrossRef]
  30. Wingfield, A.; Tun, P.A.; McCoy, S.L. Hearing loss in older adulthood: What it is and how it interacts with cognitive performance. Curr. Dir. Psychol. Sci. 2005, 14, 144–148. [Google Scholar] [CrossRef]
  31. Kim, M.B.; Shim, H.Y.; Jin, S.H.; Kang, S.; Woo, J.; Han, J.C.; Lee, J.Y.; Kim, M.; Cho, Y.S.; Moon, I.J.; et al. Cross-Modal and Intra-Modal Characteristics of Visual Function and Speech Perception Performance in Postlingually Deafened, Cochlear Implant Users. PLoS ONE 2016, 11, e0148466. [Google Scholar] [CrossRef]
  32. Seol, H.Y.; Park, S.; Ji, Y.S.; Hong, S.H.; Moon, I.J. Impact of hearing aid noise reduction algorithms on the speech-evoked auditory brainstem response. Sci. Rep. 2020, 10, 10773. [Google Scholar] [CrossRef]
  33. Campbell, J.; Sharma, A. Cross-modal re-organization in adults with early stage hearing loss. PLoS ONE 2014, 9, e90594. [Google Scholar] [CrossRef]
  34. Harkrider, A.W.; Plyler, P.N.; Hedrick, M.S. Effects of hearing loss and spectral shaping on identification and neural response patterns of stop-consonant stimuli. J. Acoust. Soc. Am. 2006, 120, 915–925. [Google Scholar] [CrossRef]
  35. Intartaglia, B.; Zeitnouni, A.G.; Lehmann, A. Recording EEG in cochlear implant users: Guidelines for experimental design and data analysis for optimizing signal quality and minimizing artifacts. J. Neurosci. Methods 2022, 375, 109592. [Google Scholar] [CrossRef] [PubMed]
  36. Li, X.; Nie, K.; Karp, F.; Tremblay, K.L.; Rubinstein, J.T. Characteristics of stimulus artifacts in EEG recordings induced by electrical stimulation of cochlear implants. In Proceedings of the 2010 3rd International Conference on Biomedical Engineering and Informatics, Yantai, China, 16–18 October 2010; pp. 799–803. [Google Scholar]
  37. McClannahan, K.S.; Backer, K.C.; Tremblay, K.L. Auditory Evoked Responses in Older Adults With Normal Hearing, Untreated, and Treated Age-Related Hearing Loss. Ear Hear. 2019, 40, 1106–1116. [Google Scholar] [CrossRef] [PubMed]
  38. Gurkan, S.; Mungan Durankaya, S. The effect of sensorineural hearing loss on central auditory processing of signals in noise in older adults. Neuroreport 2023, 34, 249–254. [Google Scholar] [CrossRef]
  39. Adler, G.; Adler, J. Influence of stimulus intensity on AEP components in the 80- to 200-millisecond latency range. Audiology 1989, 28, 316–324. [Google Scholar] [CrossRef] [PubMed]
  40. Doucet, M.E.; Bergeron, F.; Lassonde, M.; Ferron, P.; Lepore, F. Cross-modal reorganization and speech perception in cochlear implant users. Brain 2006, 129 Pt 12, 3376–3383. [Google Scholar] [CrossRef]
Figure 1. Grand average waveforms for VEP.
Figure 2. Grand average waveforms for AEP.
Table 1. Participant characteristics.
Group | Sex | Age | Four-Frequency Pure-Tone Average (Right/Left, dB) | Etiology of HL | Duration of HL (mos) | CI Side | Device
NH1 | F | 32 | 7.1/5.9 | N/A | N/A | N/A | N/A
NH2 | F | 23 | | | | |
NH3 | F | 30 | | | | |
NH4 | F | 23 | | | | |
NH5 | M | 44 | | | | |
NH6 | F | 43 | | | | |
NH7 | M | 26 | | | | |
HL1 | M | 54 | 62.5/59.7 | Sudden | 192 | N/A | N/A
HL2 | M | 62 | | Unknown | 24 | |
HL3 | F | 63 | | Unknown | 144 | |
HL4 | F | 61 | | Sudden | 288 | |
CI1 | M | 42 | 59.0/81.5 | Chronic otitis media | 48 | L | KANSO 2
CI2 | F | 51 | | Sudden | 24 | L | RONDO 2
CI3 | F | 56 | | Sudden | 96 | R | KANSO 2
CI4 | F | 36 | | Unknown | 240 | L | KANSO 2
CI5 | F | 48 | | Unknown | 240 | L | KANSO 2
N/A: Not available.
Table 2. Average P1 amplitudes for Oz, O1, and O2 for the groups.
Group | Oz (µV) | O1 (µV) | O2 (µV)
NH | 5.7 | 7.6 | 7.4
HL | 5.3 | 6.1 | 5.7
Pre-CI | 5.6 | 8.2 | 7.8
Post-CI | 4.3 | 5.5 | 5.3
Table 3. Average N1 amplitudes for 1000 Hz, /ba/, and /da/ for the groups.
Stimulus | Group | Fz (µV) | T7 (µV) | Cz (µV) | T8 (µV) | Pz (µV)
1000 Hz | NH | −5.9 | −2.6 | −5.1 | −3.2 | −3.1
1000 Hz | HL | −2.5 | −1.4 | −2.3 | −0.7 | −1.6
1000 Hz | Pre-CI | −0.2 | −0.9 | 0.1 | −0.3 | 0.1
/ba/ | NH | −3.3 | −1.9 | −3.0 | −2.1 | −2.0
/ba/ | HL | −4.3 | −2.3 | −4.3 | −1.5 | −2.8
/ba/ | Pre-CI | −1.0 | −0.7 | −0.9 | −0.4 | −0.6
/da/ | NH | −3.3 | −1.6 | −3.3 | −2.3 | −2.2
/da/ | HL | −3.2 | −1.4 | −3.4 | −1.3 | −2.3
/da/ | Pre-CI | −1.1 | −1.0 | −1.1 | −1.7 | −1.2

Share and Cite

MDPI and ACS Style

Seol, H.Y.; Kang, S.; Kim, S.; Kim, J.; Kim, E.; Hong, S.H.; Moon, I.J. P1 and N1 Characteristics in Individuals with Normal Hearing and Hearing Loss, and Cochlear Implant Users: A Pilot Study. J. Clin. Med. 2024, 13, 4941. https://doi.org/10.3390/jcm13164941
