Article

Evaluation of Two Self-Fitting User Interfaces for Bimodal CI-Recipients

1 Department of Otolaryngology, Medical School of Hannover, 30625 Hannover, Germany
2 Cluster of Excellence “Hearing4all”, Medical School of Hannover, 30625 Hannover, Germany
3 Advanced Bionics, European Research Center, Feodor-Lynen-Str. 35, 30625 Hannover, Germany
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(14), 8411; https://doi.org/10.3390/app13148411
Submission received: 1 June 2023 / Revised: 17 July 2023 / Accepted: 18 July 2023 / Published: 20 July 2023

Abstract:
Smartphones are increasingly being used to enable patients to play an active role in managing their own health through applications, also called apps. The latest generation of sound processors for cochlear implants offers Bluetooth connectivity that makes it possible to connect smartphones or tablets and thus enables patients to modify their hearing sensation or measure system parameters. However, to achieve a high adoption rate and secure operation of these applications, it is necessary to design intuitive user interfaces (UIs) for end users. The main goal of the current study was to evaluate the usability of two different UIs. A second goal was to compare the hearing outcomes based on the patient’s adjustments. The two different UIs were explored in a group of adult and older adult bimodal cochlear-implant users, with adjustments possible for both the cochlear implant and the contralateral hearing aid. One of the UIs comprised a classical equalizer and volume-dial approach, while the second UI followed a 2D-Surface concept to manipulate the corresponding sound parameters. The participants changed their fitting parameters using both UIs in seven different sound scenes. The self-adjusted settings for the different scenarios were stored and recalled at a later stage for direct comparison. To enable an assessment of reliability and reproducibility, the self-adjustment was also repeated for two of the seven sound scenes. Within minutes, the participants became accustomed to the concept of both UIs and generated their own parameter settings. Both UIs resulted in settings that could be considered similar in terms of spontaneous acceptance and sound quality. Furthermore, both UIs showed high reliability in the test–retest procedure. The time required for adjustment was significantly shorter with the 2D-Surface UI.
A closer look at the bimodal aspect shows that participants were able to compensate for differences in loudness and frequencies between the cochlear implant and the hearing aid. The blind comparison test showed that self-adjustment led to a higher acceptance of the sound perception in more than 80% of the cases.

1. Introduction

A cochlear implant (CI) is a prosthetic device for the inner ear that directly stimulates the auditory nerve, thus circumventing the damaged inner hair cells and generating audible sensations in spite of deafness. In recent years, the indication criteria for a CI have been expanded, and currently many patients who have significant hearing in their contralateral ear receive a CI and use it in parallel with a hearing aid (HA). Users with such a combination of a CI and HA are also called bimodal users. Unlike a CI, a HA amplifies ambient sounds and presents them acoustically to the user. The amplification is calibrated according to prescription methods; for fitting adult hearing aids, the two commonly recommended methods are NAL-NL2 and DSL v.5 [1]. These methods use nonlinear gain fitting strategies based on either loudness normalization or loudness equalization principles. Both strategies take into account various patient and device characteristics. For example, the desired sensation level (DSL) method takes into account the patient’s hearing thresholds, previous amplification experience, number of hearing aid channels, and volume exposure [2]. With current CI systems, it is possible to achieve good speech perception in a quiet environment [3,4,5]. However, in challenging environments, such as with background noise or reverberation, most users still have problems understanding speech [6,7,8]. Compared to a CI used alone, however, bimodal stimulation, i.e., combining electric (CI) and acoustic (HA) stimulation, can lead to improvements in speech understanding in noise, in sound localization, and in music perception [9,10]. Another important factor impacting speech perception is adequate fitting of the sound processors. Correctly assessing the thresholds and the upper stimulation levels is the most relevant step in the adjustment process [6]. Generally, the fitting process of a HA or a CI is highly individual.
The initial fit is usually made by an expert, i.e., an audiologist. The subsequent fine-tuning is based on the expert’s experience, taking into account the user’s feedback. The fine-tuned HA or CI is then worn by the user under different listening conditions in everyday life. Further appointments for fine-tuning generally follow, until a setting is found that is satisfactory for the user [11,12]. Such a fitting process may therefore take several sessions, and the final setting is very dependent on the experience of the expert and the time spent with the participant. It is precisely this point that often leads to problems, particularly for bimodal users, since the CI and HA may not be fitted by the same expert. The different experiences and approaches of the experts have an impact on the hearing sensation with the two devices. One approach to overcoming these problems could be to let the users adjust the HA and CI settings themselves. In everyday situations, this could lead to an individual adjustment of the sound and to an improvement of the auditory sensation. This concept of self-fitting was previously investigated for HAs by Gößwein et al. [11] and for cochlear implants by Vroegop et al. [6] and Botros et al. [13]. They showed that self-fitting led to a higher acceptance of the sound in different environments but not to higher speech intelligibility. However, bimodal users were not considered in these studies. In these studies, starting from an expert baseline setting, the user was allowed to amplify or attenuate certain frequency ranges of the input signal with the help of a tool. The user was thus able to adjust the sound to his or her personal preferences. Considering that the setting options for manipulating the input signal were the same for both devices, a bimodal solution should be feasible.
Simultaneous fitting of the CI and the HA with the same self-fitting tool could also be used to compensate for the different fitting approaches used by the different experts during the initial fitting. Considering the fact that smartphones are becoming more and more popular throughout society, even among the elderly population, they represent the perfect platform for such a self-fitting tool. In combination with the latest CI and HA generations with an integrated Bluetooth interface, the development of such self-care applications, generally called apps, becomes possible. In existing apps in the HA field, the user interface (UI) mostly follows the basic principle of an equalizer. A good example of such an approach is the myPhonak app (Phonak), in which the low, mid, and high frequencies, as well as the overall volume, can be adjusted with the help of four sliders [14]. For CIs, there is a similar concept in the Nucleus Smart app, made by Cochlear. In this app, the treble, bass, and overall volume can be adjusted [15]. However, other experimental approaches to a user interface can be found in the literature. One of these is the 2D-Surface UI that was used by Dreschler et al. [16] and examined in more detail by Gößwein et al. [11] and Rennies et al. [17]. With the help of this UI, the user can change the sound impression by moving a point with the fingertip on a touch-sensitive rectangular surface. Compared to other UIs, such a 2D-Surface can provide advantages. By minimizing the number of controls, the median duration of a self-fitting session could be reduced to less than one minute; at the same time, a high reproducibility was observed for such self-fitting under test–retest conditions. In comparison, participants required between 1 and 4 min for adaptation with other UIs tested [17]. We therefore decided to use the 2D-Surface approach in our study and compare it to the established equalizer-based (EQ) approach.
Thus, besides the goal of evaluating a self-fitting tool for bimodal users, another aim of this study was to compare the 2D-Surface UI to the EQ UI and to the current state-of-the-art expert fitting. We also wanted to examine whether the experience gained in the above-mentioned studies could be confirmed in the field of CIs, especially for bimodal users.

2. Materials and Methods

2.1. Study Participants

A total of 18 bimodal, postlingually deaf participants were included in the study. A short overview of their demographics is shown in Table 1. Each of the participants was implanted with an Advanced Bionics implant (Clarion II or one of the more recent HiRes series) and used the HiRes Optima or HiRes 120 coding strategy. Contralaterally, the participants were fitted with a regular HA, of various types and brands, and had an earmold. Table 2 shows the respective supply of each patient per side. Hearing thresholds on the HA side were better than or equal to 80 dB HL from 250 Hz to 1 kHz (see Figure 1), and all participants had at least 50% speech understanding in quiet conditions in the Hochmair-Schulz-Moser (HSM) speech test. In addition, each participant had to have experience using a smartphone, meaning regular use of at least one app per day. Furthermore, all patients had to be in sufficiently good health to operate a smartphone. All participants were patients of the Medical School of Hannover. The screening process initially included an evaluation of the internal patient database according to the aforementioned inclusion criteria. Potential candidates (n = 215) were contacted in order of eligibility, until the required number of participants was reached, and asked about their smartphone experience and possible impairments to their motor skills. If a patient met all requirements and was interested in the study, an initial appointment was scheduled. At this appointment, the patient was informed about the study and enrolled.

2.2. User Interfaces

Two different UIs were explored to establish which would be best suited for setting treble, mid, bass, overall volume, and the balance/emphasis between the HA and the CI. Both UIs allowed the user to make these changes, based on an initial setting that was made by an expert. One of the UIs comprised a classical equalizer and volume-slider approach (see Section 2.2), while the second UI utilized a tactile 2D-Surface concept (see Section 2.3) to manipulate the corresponding sound parameters. The HAs and CI processors used in the study operated internally with a sampling rate of 22,050 Hz and a short-time Fourier transformation (STFT) filter bank with Hamming windows, a frame size of 256 samples, and a frame advance of 1/4 of the frame size, with the resulting spectrum interpolated to 20 bins at Bark resolution. The center frequency for each bin is shown in Table 3.
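The filter-bank parameters above can be sketched in Python. Note that the Hz-to-Bark mapping used here (a Zwicker-style formula) and the equal spacing of the 20 bins on the Bark axis are our assumptions for illustration; the paper specifies the actual center frequencies in Table 3.

```python
import numpy as np

FS = 22050        # device-internal sampling rate (Hz)
FRAME = 256       # STFT frame size (samples), Hamming-windowed
HOP = FRAME // 4  # frame advance of 1/4 of the frame -> 64 samples
N_BINS = 20       # number of Bark-resolution bins

def hz_to_bark(f):
    """Hz-to-Bark conversion (Zwicker-style formula; an assumption here,
    the paper only lists the resulting center frequencies in Table 3)."""
    f = np.asarray(f, dtype=float)
    return 13.0 * np.arctan(0.00076 * f) + 3.5 * np.arctan((f / 7500.0) ** 2)

# linear frequencies of the one-sided 256-point FFT
fft_freqs = np.fft.rfftfreq(FRAME, d=1.0 / FS)

# 20 bins spaced equally on the Bark axis up to the Nyquist frequency
bark_edges = np.linspace(0.0, hz_to_bark(FS / 2.0), N_BINS + 1)
bark_centers = 0.5 * (bark_edges[:-1] + bark_edges[1:])
window = np.hamming(FRAME)
```

With these values, one analysis frame covers about 11.6 ms of signal and consecutive frames overlap by 75%.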

2.2. Equalizer UI

The equalizer UI in Figure 2b allows the adjustment of three frequency ranges: low (center frequency 172 Hz), mid (center frequency 861 Hz), and high (center frequency 4.5 kHz), as well as the overall volume. These adjustments can be made for both the HA and the CI simultaneously, or separately, if the two pointers for each slider are decoupled. For each slider, an adjustment of ±10 dB is possible. Using Equation (1), the range $r_a$, the normalized value $nv(k)$ (see Table 3—EQ) for every slider (bass, mid, treble, and volume), and the slider position $sp$, the respective delta gain $\delta$ for each filter-bank bin $k$ can be determined. Combining the volume slider with the sliders for the individual frequency ranges, a gain of ±20 dB is possible.
$$\delta(k) = r_a \cdot \left( sp_{volume} + sp_{bass} \cdot nv_{bass}(k) + sp_{mid} \cdot nv_{mid}(k) + sp_{treble} \cdot nv_{treble}(k) \right) \tag{1}$$
with: $r_a = 10$, $D_{sp} = \{ sp \in \mathbb{Q} \mid -1 \le sp \le 1 \}$
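A minimal Python sketch of Equation (1); the dictionary keys and the illustrative $nv(k)$ weights used in the example are our assumptions, while the actual normalized per-bin values are given in Table 3 of the paper.

```python
import numpy as np

RA = 10.0  # gain range r_a in dB

def eq_delta_gain(sp, nv):
    """Delta gain (dB) per filter-bank bin k for the equalizer UI, Equation (1):
    delta(k) = r_a * (sp_volume + sp_bass*nv_bass(k)
                      + sp_mid*nv_mid(k) + sp_treble*nv_treble(k))

    sp: slider positions in [-1, 1], keys 'volume', 'bass', 'mid', 'treble'
    nv: normalized per-bin weights (length-K arrays) for the three bands
    """
    for v in sp.values():
        if not -1.0 <= v <= 1.0:
            raise ValueError("slider positions are restricted to [-1, 1]")
    return RA * (sp['volume']
                 + sp['bass'] * np.asarray(nv['bass'])
                 + sp['mid'] * np.asarray(nv['mid'])
                 + sp['treble'] * np.asarray(nv['treble']))
```

With the volume slider and one band slider both at +1, a bin where that band's $nv(k) = 1$ reaches the ±20 dB extreme mentioned in the text.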

2.3. 2D-Surface UI

The 2D-Surface UI in Figure 2a is based on the concepts presented by Dreschler et al. [16], Gößwein et al. [11], and Rennies et al. [17]. In this approach, the participant can adjust the corresponding sound parameters by moving a point in a square. Just as with the EQ UI, the adjustments can be made for both the HA and the CI simultaneously, or separately, if the two points for the left and right are decoupled. By shifting the point along the x-axis of the coordinate system, different specific gain-frequency curves are retrieved. These have their focus in different frequency ranges and can be amplified or reduced by shifting them along the y-axis. Using Equation (2) and the normalized value $nv(k)$ (see Table 3—2D-Surface), the delta gain $\delta$ for each filter-bank bin $k$ can be determined for each pointer position $pp_{x,y}$. Figure 3 shows different gain curves for different point positions along the x-axis, with a fixed value of $pp_y = 0.5$. Unlike the EQ user interface, where the filter consists of three bandpasses, this user interface combines a highpass and a lowpass, which means that the gain curves set by the participant might differ between the two approaches. However, a gain of ±20 dB is also possible with this UI.
$$\delta(k) = r_a \cdot \left( pp_y + pp_x \cdot nv(k) \right) \tag{2}$$
with: $r_a = 10$, $D_{pp} = \{ pp_{x,y} \in \mathbb{Q} \mid -1 \le pp_{x,y} \le 1 \}$
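Equation (2) reduces to a one-liner in Python; the ramp-shaped $nv(k)$ used in the example below is an illustrative stand-in for the Table 3 values.

```python
import numpy as np

RA = 10.0  # gain range r_a in dB

def surface_delta_gain(pp_x, pp_y, nv):
    """Delta gain (dB) per bin k for the 2D-Surface UI, Equation (2):
    delta(k) = r_a * (pp_y + pp_x * nv(k)).

    pp_y shifts the whole gain curve up or down (overall gain); pp_x blends
    between low-pass- and high-pass-shaped per-bin weights nv(k).
    """
    if not (-1.0 <= pp_x <= 1.0 and -1.0 <= pp_y <= 1.0):
        raise ValueError("point positions are restricted to [-1, 1]")
    return RA * (pp_y + pp_x * np.asarray(nv))
```

At a corner position such as $pp_x = pp_y = 1$, bins with $nv(k) = 1$ again reach the ±20 dB extreme stated in the text.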

3. Procedures

In this monocentric study, a single-participant design with repeated measures was used. This approach, in which each participant serves as his or her own control, takes into account the heterogeneity that is characteristic of hearing-impaired participants. All measurements were performed on a single study date. An experienced audiologist created the expert settings for the CI and the HA using the regular fitting software and instructed the participants about how to use the APP. The measurements were realized as a randomized crossover study, using an AB/BA design. In this common form of randomized crossover trial, participants are randomly assigned to either treatment A followed by treatment B or treatment B followed by treatment A [18]. All participants were randomly assigned to the two different study arms. Based on this decision, they either started with the EQ UI or with the 2D-Surface UI and were later switched to the other condition. This ensured that possible training effects in the personal self-fitting process had no influence on the data. This also ensured that the familiarization phase for each sound condition took place with the EQ UI or with the 2D-Surface UI randomly. The complete study procedure is shown in Figure 4.

3.1. Expert-Fitting Procedure

For the duration of the study, all participants were fitted with an Audeo Marvel R 90 RIC (Phonak) HA on the hearing-impaired ear, and with a Neptune (Advanced Bionics) processor on their implanted ear. For the CI side, all settings, as well as the most used map, were exported from the patient’s private processor and imported to the study Neptune processor. This ensured that the hearing impression was close to that experienced by the patients in everyday life. Participants were asked whether the new program sounded familiar, and changes were made if a participant was not satisfied. This final setting reflected the expert setting for the CI in the subsequent tests. For the expert setting on the HA side, however, a completely new hearing program was created. Due to the large variety of HA manufacturers used by the participants, it was decided not to export the settings from the private HAs. Instead, an expert setting was newly created by an audiologist using the Target 7.0 fitting software (Phonak). This fitting was carried out based on the measurements of the AudiogramDirect functionality and the use of the Adaptive Phonak Digital Bimodal (APDB) formula. This fitting formula takes into account that, in bimodal CI users, especially those with limited hearing on the contralateral side, the CI often dominates the understanding of spoken language [19]. Due to its design features, the CI mainly encodes the frequencies (1–4 kHz) that are critical for speech intelligibility. For bimodal patients, the APDB fitting formula emphasizes the audibility of low-frequency information, which complements the CI and provides temporally fine-grained structural information to support speech understanding in noisy environments. In addition, the volume enhancement and automatic gain control (AGC) features were matched between the CI processor and the HA [20]. Nevertheless, according to Digeser et al.
[21], there is no consensus on the fitting strategy for HAs in bimodal users. However, there are two common approaches that prevail in the calibration of HAs for bimodal users. The first approach is also the approach followed by the APDB fitting formula. The HA is fitted to support the CI by improving audibility in low frequencies and reducing amplification in higher frequencies [22]. The second approach is to optimize the CI and HA individually for speech perception. Regardless of the fitting approach, a compensation procedure is applied, to match the perceived loudness between the CI and the HA. In this study, the loudness of the new fitting on the HA side was matched to the loudness of the CI side. This loudness adjustment was performed spontaneously and without further measurements in consultation with the participant. Subsequent fine tuning was performed, according to the needs of the participant. For both devices, only the omnidirectional microphones were activated. The CI and the HA expert settings also served as the baseline settings from which changes were made with the UIs by the participants themselves.

3.2. Self-Fitting Procedure

Depending on the group, the participants either started with the EQ UI or the 2D-Surface UI. In both cases, while different sounds were presented via a loudspeaker (soft and middle-loud speech in quiet, loud speech in noise, pop and classical music), the participants had the task of adjusting their CI and HA with regard to volume and timbre. For the condition of speech in quiet, a text spoken by a female speaker was played once with a normal voice (65 dB(A)) and once with a quieter voice (50 dB(A)). Loud speech in noise was realized as a recorded conversation with background babble, comparable to a cocktail-party situation. For the pop-music condition, the song “The Alan Parsons Project—Limelight” was used, and for classical music the piece “Bach—Brandenburgische Konzerte Nr. 1 in F Major, BWV 1046”. The sound samples were presented until the participant completed the self-fitting; they were started and stopped by the supervisor. During the adjustment, the participants were seated in front of the loudspeaker at a distance of one meter. Besides the laboratory situations, a self-fitting was also conducted in two natural sound scenarios. The first location was the entrance hall of the German Hearing Center (GHC) at the Medical School Hannover. This is a large room with strong reverberation and a radio running quietly in the background. The second location was directly beside a busy street in front of the GHC. This location was outdoors and therefore occasionally faced weather-related challenges, such as wind. In both situations, the supervisor had a conversation with the participant during the self-fitting to give them an idea of the effect of their changes. At the end of each adjustment, the participants were asked how satisfied they were with the adjustment on a scale from (1) very dissatisfied to (5) very satisfied. This rating was used to evaluate the spontaneous acceptance of the new settings.
To check the reproducibility of the settings made with the two UIs, a re-test was conducted at the end of the study session. For this purpose, the participants repeated the self-fitting, but only for the loud speech in noise and the pop-music sound samples.

3.3. Blind Preference Test

The final settings for both UIs for each sound condition in Section 3.2 (except the two natural sound scenes) were used for a blinded A/B comparison test. This resulted in the following pairwise comparisons that were made for each of the five recorded sound conditions: 2D-surface UI vs. expert fit, EQ UI vs. expert fit, and 2D-surface UI vs. EQ UI. Subsequently, each participant was asked to decide a total of 15 times which setting they preferred. The participants could freely switch between the two settings while listening to the sound presentation, and the next sound sample was only presented when the participants had made their choice. The participant sat one meter away and in front of the loudspeaker presenting the different sounds. The order of the pairs to be compared was randomized.

4. Results

4.1. Self-Fitting Final Positions

Figure 5 shows all final slider positions $sp$ in the form of violin plots. Each individual violin plot shows the final slider position for one fitting parameter (y-axis) for all study participants with the relevant sound sample (x-axis). The median (black point) and mean (black line) values for all final positions across all participants are also shown. The raw data presented in the violin plots can be viewed in Table A1, Table A2, Table A3, Table A4, Table A5, Table A6, Table A7, Table A8, Table A9, Table A10, Table A11 and Table A12 in Appendix A. To check whether the final position of the slider for the same parameter varied significantly between the different sound conditions, or whether the final position was independent of the sound presented, a Friedman test was performed. At a significance level of $p_{sig} = 0.05$, significant differences were found for the volume ($p_{CI} < 0.01$, $p_{HA} < 0.01$) and treble on the HA side ($p_{HA} = 0.02$). However, no significant deviation was obtained for the bass ($p_{CI} = 0.96$, $p_{HA} = 0.42$), mids ($p_{CI} = 0.14$, $p_{HA} = 0.07$), or treble on the CI side ($p_{CI} = 0.23$). Due to the sample size of only n = 18 participants, the results of the Friedman test may need to be treated with caution. Regarding the comparison of the settings between HA and CI, there was a trend for the participants to make separate settings for the devices, instead of finding a setting in the paired mode that worked for both sides. Of the total of 126 self-fittings performed with the EQ UI (18 participants and 7 conditions), separate fitting was performed in 119 cases. To check whether the tendency of the fit was the same on both sides, the Kendall agreement coefficient was calculated and tested for significance. The results showed a significant correlation for the bass ($\tau = 0.17$, $p = 0.03$), mid ($\tau = 0.25$, $p < 0.01$), and treble ($\tau = 0.31$, $p < 0.01$) parameters at a significance level of $p_{sig} = 0.05$. However, no significant correlation was found for the volume parameter ($\tau = 0.07$, $p = 0.32$).
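Analyses of this kind can be reproduced with SciPy. The sketch below uses synthetic placeholder data (not the study's measurements) to show the shape of both tests: a Friedman test over the seven conditions and Kendall's tau between the CI-side and HA-side settings.

```python
import numpy as np
from scipy.stats import friedmanchisquare, kendalltau

rng = np.random.default_rng(42)

# synthetic final slider positions: 18 participants x 7 sound conditions
positions = rng.uniform(-1.0, 1.0, size=(18, 7))

# Friedman test: does the final position depend on the sound condition?
# (one repeated-measures sample of 18 values per condition)
stat, p_friedman = friedmanchisquare(*positions.T)

# Kendall's tau: do the CI-side and HA-side settings share the same tendency?
ci_side = rng.uniform(-1.0, 1.0, size=126)  # 18 participants x 7 conditions
ha_side = 0.5 * ci_side + rng.normal(0.0, 0.3, size=126)
tau, p_tau = kendalltau(ci_side, ha_side)
```

In this synthetic setup the HA settings follow the CI settings by construction, so tau comes out positive; with the purely random condition data, the Friedman test should not reach significance.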
The same aspects were also studied for the final point positions $pp_x$ and $pp_y$ of the 2D-Surface UI. Figure 6 shows the individual final positions, as well as their mean and median values. The analyses concerning the influence of the sound condition on the final point position showed that this only had a significant influence on the amplification $pp_y$ ($p_{CI} < 0.01$, $p_{HA} < 0.01$) but not on the chosen frequency range $pp_x$ ($p_{CI} = 0.65$, $p_{HA} = 0.40$). For $pp_y$ and $pp_x$, there was no significant correlation ($\tau_{pp_y} = 0.25$, $\tau_{pp_x} = 0.2$, $p_{pp_y} = 0.25$, $p_{pp_x} = 0.60$) between the settings on the CI and HA sides.

4.2. Behavioral Self-Fitting Patterns

Another aspect of the self-fitting was the frequency distribution of the slider or point position in the parameter space during the self-fitting process. The key question to be examined was whether the area around the end position of the controls was explored more carefully than areas further away. This would mean that the participants would, after a coarse positioning of the controls, perform a more thorough refinement of the settings in their targeted area. For this purpose, every movement of the point or the four sliders was continuously captured during the self-fitting process. For the EQ UI, a kernel density estimate (KDE) was computed for each participant over all tracked data from each single slider. Equation (3) shows how each participant’s kernel density estimate $KDE_s$ was normalized to 1 before the mean $KDE_{mean}$ across all 18 participants was calculated. Figure 7 shows the resulting KDE for each slider under each condition. The vertical line marks the averaged final position over all participants from Section 4.1; Figure 8 shows the same for the 2D-Surface UI. In this case, the $KDE_{mean}$ is represented as a heatmap, in which brighter colors indicate more activity of the patient’s finger in that particular area. The averaged final position from Section 4.1 is shown as a white dot.
$$KDE_{mean} = \frac{1}{S} \sum_{s=1}^{S=18} \frac{KDE_s}{\max(KDE_s)} \tag{3}$$
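Equation (3) can be sketched with SciPy's Gaussian KDE. Scaling each participant's density to a maximum of 1 before averaging makes every participant contribute equally, regardless of how concentrated their exploration was; the tracked positions below are synthetic.

```python
import numpy as np
from scipy.stats import gaussian_kde

def mean_normalized_kde(tracks, grid):
    """Equation (3): per-participant KDEs, each scaled to a maximum of 1,
    then averaged across participants.

    tracks: one 1-D array per participant with all slider/point positions
            captured during the self-fitting
    grid:   positions at which the density is evaluated
    """
    acc = np.zeros_like(grid, dtype=float)
    for t in tracks:
        dens = gaussian_kde(t)(grid)
        acc += dens / dens.max()
    return acc / len(tracks)

# synthetic example: 18 participants hovering around slider position 0.2
rng = np.random.default_rng(0)
tracks = [rng.normal(0.2, 0.15, size=200) for _ in range(18)]
grid = np.linspace(-1.0, 1.0, 201)
kde_mean = mean_normalized_kde(tracks, grid)
```

Because each normalized density is at most 1, the averaged curve is also bounded by 1, which is what makes the heatmap colors comparable across sliders and conditions.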

4.3. Self-Fitting Duration

Figure 9 shows a histogram of the time required, in seconds, for all 126 self-fittings with each UI. The adjustment was faster with the 2D-Surface UI, with an average time of 49 s (median = 41 s, 25th percentile = 29 s, 75th percentile = 59 s), than with the EQ UI, with an average time of 77 s (median = 66 s, 25th percentile = 39 s, 75th percentile = 96 s). A test for equality of the two distributions using a Wilcoxon signed-rank test showed that the null hypothesis could be rejected with a probability of $p < 0.01$, and thus the differences between the distributions were significant. The calculation of the Pearson correlation coefficient $r$ yielded a value of $r = 0.47$, which, according to Cohen [23], represents a medium-strong effect.
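The duration comparison can be reproduced as follows. Deriving the r-type effect size as $Z/\sqrt{N}$, with $Z$ recovered from the two-sided p-value, is our reading of the analysis, and the timing data below are synthetic stand-ins for the measured durations.

```python
import numpy as np
from scipy.stats import wilcoxon, norm

def paired_wilcoxon_with_r(x, y):
    """Wilcoxon signed-rank test on paired samples plus an r-type effect
    size (r = Z / sqrt(N), with Z recovered from the two-sided p-value)."""
    res = wilcoxon(x, y)
    z = norm.isf(res.pvalue / 2.0)
    return res.pvalue, z / np.sqrt(len(x))

# synthetic self-fitting durations (seconds) for the 126 fittings per UI
rng = np.random.default_rng(1)
t_surface = rng.normal(49.0, 15.0, size=126)
t_eq = t_surface + rng.normal(28.0, 20.0, size=126)  # EQ fittings take longer
p, r = paired_wilcoxon_with_r(t_surface, t_eq)
```

With a built-in mean difference of roughly half a minute, the synthetic comparison is clearly significant and yields a medium-to-large effect size.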

4.4. Satisfaction with the Self-Fitting Process

After each self-fitting, the participants had to rate their satisfaction with a score between 1 and 5, where a score of one stands for very dissatisfied and five means very satisfied. With an average rating of 4.04 (median = 4, 25th percentile = 4, 75th percentile = 5), the EQ UI performed slightly worse than the 2D-Surface UI, which had a mean score of 4.11 (median = 4, 25th percentile = 4, 75th percentile = 5). However, an evaluation with a Wilcoxon signed-rank test showed no significant difference.

4.5. Blind Preference Test

Figure 10 shows how often one setting was preferred over another within the blind comparison test. All 18 participants performed a total of 15 pairwise comparisons, resulting in a total of 270 comparisons. In 55 cases, the expert fitting was chosen; in 100 cases, the fitting created with the EQ UI; and in 113 cases, the fitting created with the 2D-Surface UI was preferred. From these choices, the preferred fitting of each participant in each of the five acoustic scenarios, shown in Figure 11, was determined. Overall, the participants preferred the clinical fitting 11 times, while the self-adjusted equalizer UI fitting was selected 27 times. The program based on the 2D-Surface UI was selected 33 times. In 19 cases, the participant did not respond consistently within the conditions, also called a cyclic triad, and thus there was no clear result. To assess whether the observed number of cyclic triads was statistically significant, we compared it with a hypothetical number of cyclic triads. This number was determined using the backward cumulative binomial probability function $F_{re}(k, n, p)$, where $k$ is the number of cyclic triads, $n$ is the number of repetitions, and $p$ is the probability of occurrence. The critical upper bound of possible cyclic triads $k_{critical}^{upper}$ is defined as
$$k_{critical}^{upper} = \min\left\{ k \mid F_{re}(k, n, p) \le p_{sig} \right\} \tag{4}$$
where $p_{sig} = 0.05$ and $p = 0.25$. For all decisions (n = 90), this resulted in a critical value of $k_{critical}^{upper} = 29$, which was not reached by the data. For each condition, there was a critical number of $k_{critical}^{upper} = 8$, which was also not reached in any sound environment. The maximum allowed number of cyclic triads per participant was $k_{critical}^{upper} = 3$. This number was reached by participants ID12 and ID17, indicating unreliable feedback from these two participants. The data from these two participants were therefore excluded from further analyses. A check of the remaining data for concordance according to Kendall [24,25] resulted in a coefficient of agreement of $W = 0.089$. The range of possible values for $W$ is from 0 (maximum disagreement, discordance) to 1 (maximum agreement, concordance). This result was significant at the level of 0.01. The null hypothesis of no correlation in the decisions could therefore be rejected. In order to better classify the result, scaling was carried out with the Bradley–Terry–Luce model according to Wickelmaier and Schmidt [26]. This resulted in a choice probability of $p = 0.18$ for the clinical setting, $p = 0.37$ for the setting created with the EQ UI, and $p = 0.44$ for the 2D-Surface UI setting. Thus, there was a tendency towards the 2D-Surface UI setting. An even clearer result was that, in about 80% of cases, the settings from self-fitting were preferred over the clinical settings.
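The critical bound from Equation (4) can be reproduced with SciPy's binomial survival function. Reading $F_{re}(k, n, p)$ as the upper-tail probability $P(X > k)$ is our interpretation, but it reproduces the per-condition (8) and per-participant (3) bounds reported above.

```python
from scipy.stats import binom

def critical_cyclic_triads(n, p=0.25, p_sig=0.05):
    """Smallest k with P(X > k) <= p_sig for X ~ Binomial(n, p), i.e. the
    largest number of cyclic triads still compatible with purely random
    (inconsistent) choices at significance level p_sig."""
    k = 0
    while binom.sf(k, n, p) > p_sig:
        k += 1
    return k
```

For the five per-participant triads this yields 3, and for the 18 per-condition triads it yields 8, matching the bounds in the text.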

4.6. Reliability of the Self-Fittings

To test the reproducibility of the self-fittings, the two sound conditions “loud speech in noise” and “pop music” had to be repeated by the participants with both UIs. For the first participant, ID01, the data for the retest were unfortunately not available. The evaluation of the reliability of the self-assessment was therefore based on the 17 remaining participants. The final positions for $sp$ and $pp$ can be viewed in Table A13, Table A14, Table A15, Table A16, Table A17, Table A18, Table A19, Table A20, Table A21, Table A22, Table A23 and Table A24 in Appendix B. To enable a comparison of the final positions, the data of each sound parameter $par$ and each participant $p$ from the first test ($te$) were subtracted from the end position of the retest ($re$); see Equations (5) and (6). This subtraction resulted in a new range of values from −2 to 2 for the slider positions $sp$ and point positions $pp$. Figure 12 shows the resulting differential slider positions for the EQ UI, and Figure 13 shows the differential point positions for the 2D-Surface UI. Table 4 shows the standard deviation and the mean value of the resulting distributions $sp_{diff}$ and $pp_{diff}$ for each parameter. Furthermore, the Kendall correlation coefficient of the two distributions was calculated and tested for significance. The results for the correlation of $sp_{par,re}$ and $sp_{par,te}$ and its probability $p$ in Table 5 show that, for each parameter, the null hypothesis of no correlation could be rejected at a significance level of $p_{sig} = 0.01$, in favor of the alternative hypothesis of an existing correlation.
The average time required for the retest self-fitting (n = 38) with the EQ UI was 67 s. In the first run, the participants needed an average of 89 s with the “loud speech in noise” and “pop music” conditions. The duration of fitting had therefore decreased. When comparing the same data for the 2D-Surface UI, the time required had increased slightly from 50 to 51 s. A test of the equality of the distributions using a Wilcoxon signed rank test showed that the null hypothesis could be rejected for the EQ UI, with a probability of p < 0.01 , and thus the differences between the distributions were significant. The calculation of the Pearson correlation coefficient resulted in a value of r = 0.46 , and thus according to Cohen [23] represents a medium-strong influence. For the 2D surface UI, the probability within the Wilcoxon signed rank test was p = 0.65 . Thus, the two distributions did not differ significantly.
sp_{par,diff}(p) = sp_{p,par,re} − sp_{p,par,te}  (5)
pp_{par,diff}(p) = pp_{p,par,re} − pp_{p,par,te}  (6)
with: D_p = {p ∈ ℤ | 1 ≤ p ≤ 18}
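Equations (5) and (6) and the Kendall correlation check can be sketched numerically as follows. The position arrays are illustrative stand-ins (correlated by construction), not the tabulated study data; with the values from Appendix A and Appendix B substituted in, the same calls yield the statistics in Tables 4 and 5.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(1)
# Illustrative stand-ins for the final slider positions of one parameter
# (range -1..1) for the 17 participants with retest data.
sp_te = rng.uniform(-1, 1, 17)                 # first test
sp_re = np.clip(sp_te + rng.normal(0, 0.2, 17), -1, 1)  # retest

# Eq. (5): per-participant retest-minus-test difference, range -2..2.
sp_diff = sp_re - sp_te

# Test-retest agreement: Kendall correlation between the two runs.
tau, p = kendalltau(sp_re, sp_te)
print(f"mean diff = {sp_diff.mean():+.3f}, "
      f"sd = {sp_diff.std(ddof=1):.3f}, tau = {tau:.2f} (p = {p:.4f})")
```

The point positions pp of Equation (6) are handled identically, once per coordinate.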

5. Discussion

Overall, the study showed that all participants were able to make adjustments to their hearing with both self-fitting user interfaces. There was also no evidence of an effect of participant age or duration of hearing-aid use. Neither during the self-fitting nor afterwards did the participants indicate that they had difficulties with the fitting. This statement is supported by the data on self-fitting behavior in Section 4.2; therefore, no further investigation was conducted in this regard. The test–retest procedure confirmed the statements of Gößwein et al. [11] and Rennies et al. [17] that participants were able to make reproducible settings with the help of the 2D-Surface UI. Furthermore, a significant correlation was also observed for the Equalizer UI. It became apparent that, for both UIs, there were parameters that differed significantly between the seven acoustic environments, indicating the need for situation-specific MAP settings for different acoustic scenarios. Furthermore, the high dispersion of the data within a given condition indicates how individual the respective settings were; for example, no clear pattern in the frequency adaptation of all participants within a particular condition could be identified. This finding is in line with the data from the study by Vroegop et al. [6] and shows how useful it is to enable CI recipients to perform a situation-dependent fitting independently.
The existing correlation between the endpoints of the CI and HA settings for the parameters bass, mid, and treble may indicate that a homogeneous sound image was set with the expert setting on both sides, which is coherent in itself but does not necessarily correspond to the personal taste of the participants. This means that only an adaptation to personal taste took place, and therefore the tendency of the settings was the same on both sides. However, no significant correlation was present for the volume parameter. This may be because less attention was paid to the homogeneous volume perception of both devices when creating the expert fit. This probably resulted in an adjustment of the volume by the participants themselves, which led to contrasting settings.
However, independently of the side, it can be noted that, during the adjustment process, the areas with the most activity often went hand in hand with the end positions. This leads to the conclusion that a wide range of parameters was initially explored, but that a preferred range was quickly found, within which fine adjustments were subsequently made. The averaged endpoints, as well as the peaks of the averaged KDE in Figure 7 and Figure 8, were often centrally located in the parameter space. The lowest values of KDE_mean were frequently located in the peripheral areas of this space, but still far enough away from the overall boundaries, which leads to the conclusion that the parameter space provided was sufficiently large.
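The density-of-activity analysis behind this argument can be mirrored with `scipy.stats.gaussian_kde`. This is a sketch, not the study's pipeline: the trajectory below is a random stand-in for one participant's visited positions on the 2D surface.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
# Illustrative trajectory of visited (x, y) positions on the 2D surface
# (clipped to the -1..1 parameter space); NOT the recorded study data.
xy = rng.normal(0.2, 0.3, size=(2, 200)).clip(-1, 1)

# Kernel density estimate of the adjustment activity over the surface.
kde = gaussian_kde(xy)

# Evaluate on a grid to locate the peak of activity.
grid = np.linspace(-1, 1, 41)
gx, gy = np.meshgrid(grid, grid)
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
peak = np.unravel_index(density.argmax(), density.shape)
print(f"activity peak near x = {grid[peak[1]]:+.2f}, "
      f"y = {grid[peak[0]]:+.2f}")
```

Averaging such per-participant density maps gives a KDE_mean surface whose peaks and low-density margins can be compared against the boundaries of the parameter space, as done above.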
Rennies et al. [17] previously showed that self-fitting can be performed very quickly by participants with the 2D-Surface UI. This is confirmed by the data in Figure 9, which show that an adjustment with the 2D-Surface UI took significantly less time than an adjustment with the EQ UI. This faster adjustment with the 2D-Surface UI, at equal satisfaction with the self-fitting itself (see Section 4.5), must be considered when selecting a user interface for subsequent studies or clinical practice. The time saving can be attributed to a faster familiarization with the 2D-Surface UI, but also to the smaller number of controls: instead of four sliders, only one point has to be moved. The data from the reliability test support this interpretation of a faster acclimatization and a lower training effect for the 2D-Surface UI: while the EQ UI showed a significantly shorter duration in the retest, the 2D-Surface UI needed almost the same time as in the first run. Another important outcome of the reliability test was that the participants were evidently able to choose a comparable setting for the same condition in the retest, which is consistent with Gößwein et al.’s [11] observations for the 2D-Surface UI. Saving and recalling a self-fitting in everyday life would therefore probably be useful and would minimize the time needed to achieve an optimal sound impression in recurring situations. It is even imaginable that the individually determined settings could be recalled automatically: the classifiers of modern CI processors continuously monitor the acoustic environment, so a setting could be linked to the acoustic scene detected at the time of the fitting and reapplied whenever a similar scene is recognized.
The blind preference test and the responses to the question “How satisfied are you with your self-fitting?” indicate that CI users subjectively tended to prefer the self-fitted settings over the expert setting. However, it should be noted that, due to the wide variety of HA manufacturers used by the participants in everyday life, the settings of the private HAs were not transferred to the test devices; within the study, the measurements were performed with new HA settings to which the participants were not accustomed. The lack of a familiarization period may have affected the results for the expert setting in the blind comparison test. However, as there was no familiarization phase for the two self-fitted conditions either, this aspect was not investigated further. It would be important to verify in further studies whether improvements in speech understanding can also be detected in clinical audiometric tests. According to Gößwein et al. [11] and Vroegop et al. [6], no benefit in speech understanding is expected for unilaterally fitted hearing-aid or CI users, but whether this also applies to bimodal users, for whom a matched perception between the two devices might play a greater role, remains to be examined. Likewise, it would be interesting to verify whether bimodal self-fitting, and the associated volume matching between the two devices, results in better directional hearing. Since the data in this study show that both UIs were suitable and delivered similarly good results, further research is needed to find a possible “one size fits all” solution. However, such a solution might not fully suit all patients, and during our study it became apparent that the participants had different preferences for the two UIs.
In fact, some of them reported that they could imagine using the more time-consuming Equalizer UI in scenarios where they have more time for a thorough adjustment, e.g., in a theater or cinema, whereas the 2D-Surface UI could be more favorable in a restaurant, where a quick, one-handed adjustment under the table might be more discreet. Since the adjustment time for the 2D-Surface UI is shorter, it would be interesting to know in which specific situations users would select each UI. The EQ UI allows a more precise adjustment, due to its three separate frequency ranges instead of a single tone scale, but the adjustment is therefore also associated with a greater time expenditure. This and the previously mentioned points need to be investigated in a follow-up study.

6. Conclusions

In an experiment comparing two self-fitting methods with an expert fitting using A/B comparisons, the two self-fitting methods were overall preferred over the expert setting. However, no clear favorite could be identified between the two UIs; both resulted in settings that could be considered similar in terms of spontaneous acceptance and improved sound quality compared to the expert setting. Furthermore, both showed high reliability in the test–retest procedure. The time required for an adjustment was significantly shorter with the 2D-Surface UI than with the EQ UI. A closer look at the bimodal aspect showed that participants were able to compensate for loudness differences between the CI and the HA. Nevertheless, the correlation between the frequency adjustments of the two sides and the significant differences between the fittings for the various sound scenes showed that not only were differences compensated, but the sound image was also adjusted to personal taste. This personal taste appeared to be highly individual, so that no clear frequency pattern was observed for a given sound condition. This aspect illustrates the great advantage of self-fitting, as it led to a higher acceptance of the sound image in more than 80% of cases. Finally, it should be mentioned that this study was conducted exclusively with Advanced Bionics and Phonak systems; the different signal processing of other manufacturers could lead to deviating results.

Author Contributions

Conceptualization, J.C. and A.B.; methodology, J.C.; software, J.C.; formal analysis, S.K.; investigation, S.K.; resources, T.L.; data curation, S.K.; writing—original draft preparation, S.K.; writing—review and editing, S.K. and A.B.; visualization, S.K.; supervision, A.B.; project administration, A.B.; funding acquisition, T.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Advanced Bionics GmbH. Employees of this company were involved in the conceptualization, methodology, and software development steps (see also Author Contributions). No employees of the company were involved in the data collection, data analysis, or data interpretation steps.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Ethics Committee of Hannover Medical School (protocol code 8711_B0_2019 approved on 22 November 2019).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Overview of all final slider- and point-positions for each sound condition and participant (n = 18). “NaN” stands for measurement conditions that were not performed and are therefore not included in the results.
Table A1. Final Positions for sp_{Bass,CI}.
Soft SpeechMedium SpeechLoud SpeechClassic MusicPop MusicEntrance HallStreet
ID01000.3980−0.5380−0.0270NaNNaN
ID02−0.03300.2790−0.11900.465000−0.1660
ID030.0530−0.04000.4580−0.0130−0.1190−0.4580−0.0460
ID040.33200.81600.47800.90900.78300.80300.8230
ID05−0.24600−0.5770−0.46500.9960−0.40500
ID060.58400.57100.2520−0.15900.09300.2060NaN
ID07−0.99600.9960−0.0270−0.0800−0.9960−0.45100.0200
ID080.040000.01300.033000.3920−0.1790
ID090.75700.47800.45100.61100.58400.56400.6640
ID10−0.2120−0.37800.4510−0.12600.4840−0.19900.4050
ID11−0.3780−0.5240−0.77700.67000.4710−0.6700−0.0730
ID120.0930−0.5110−0.0600−0.03300.0730−0.35200.0660
ID130.64400.69000.73700.63000.24600.39200.4840
ID14−0.22600.1390−0.15300.23900.1790−0.1860−0.2650
ID150.25200.02700.1060−0.3380−0.44500.4780NaN
ID16−0.99600.4380−0.9960−0.2120−0.2120−0.13300.1460
ID17−0.2060−0.4450−0.5310−0.6440−0.35200.27900.5710
ID180.12600.1860−0.28500.26500.46500.47100.0400
Table A2. Final Positions for sp_{Bass,HA}.
Soft SpeechMedium SpeechLoud SpeechClassic MusicPop MusicEntrance HallStreet
ID01000.358000NaNNaN
ID0200−0.18600.465000−0.1660
ID030.0530−0.04000.4580−0.02000.5110−0.4580−0.0460
ID040.650000.23900.57100.54400.71700.8230
ID05−0.5310−0.46500.3780−0.4650−0.3980−0.59100
ID060.04000.06000.08600.212000.0930NaN
ID070.94900.1190−0.0460−0.08000.0860−0.45100.0200
ID0800000−0.42500
ID090.33800.02000.07300.41800.38500.10000.1000
ID10−0.4050−0.16600.4780−0.17900.5440−0.19900.7830
ID11−0.3780−0.5240−0.77700.67000.4710−0.6700−0.0730
ID12−0.1460−0.0460−0.5970−0.2720−0.4780−0.0200−0.1920
ID13−0.013000.3190−0.08000.1920−0.23200.0460
ID14−0.22600.1920−0.15300.23900.1790−0.1860−0.2650
ID150.04600.14600.17900−0.2460−0.1530NaN
ID16−0.99600.26500.31200.31200.51800.45800.1460
ID170.332000.06000.05300.0200−0.16600.2590
ID180.12600.1860−0.2920−0.03300.46500.47100.0400
Table A3. Final Positions for sp_{Mids,CI}.
Soft SpeechMedium SpeechLoud SpeechClassic MusicPop MusicEntrance HallStreet
ID0100−0.392000NaNNaN
ID020000000
ID030.1000−0.02000.0800−0.09300−0.0860−0.9030
ID040.4450000000
ID050.11300−0.2790−0.4710000
ID060.67000.64400.57700.45100.61700.1330NaN
ID07−0.11300.0270−0.0730−0.0200−0.27900.9960−0.0200
ID0800000−0.03300
ID090.56400.32500.3320−0.22600.5910−0.03300.5310
ID100.50400.46500.48400.64400.60400.83000.4050
ID110.55700.59100.59700.61700.1460−0.1000−0.2120
ID120.2650−0.0800−0.3380−0.41100.22600.4110−0.0270
ID130.42500.70400.47800.52400.41800.19900.4780
ID1400−0.22600.12600.07300−0.3520
ID150.17300.15300.23900.08000.23900.1190NaN
ID160.9960−0.37200.99600.3190−0.1920−0.02700.2190
ID170.09300−0.2850−0.305000.0070−0.1660
ID180.08000.1790−0.2590−0.1330−0.1130−0.06600.2920
Table A4. Final Positions for sp_{Mids,HA}.
Soft SpeechMedium SpeechLoud SpeechClassic MusicPop MusicEntrance HallStreet
ID0100−0.458000NaNNaN
ID020000000
ID030.1000−0.02000.0800−0.09300−0.0860−0.9030
ID040.2320000000
ID05−0.20600−0.0130−0.47100−0.43800
ID060.26500.32500.30500.43100.2590−0.0730NaN
ID070.98900.8760−0.1130−0.0400−0.11300.9960−0.0200
ID080−0.013000000
ID090.38500.1390−0.1660−0.06600.4450−0.0330−0.0400
ID100.81600.55700.39200.31900.55100.83000.7170
ID110.55700.59100.59700.61700.1460−0.1000−0.2120
ID12−0.57100.1000−0.02000.0070−0.4450−0.5640−0.4180
ID13−0.0600−0.0460−0.10600.00700.04000.19900
ID1400−0.22600.12600.07300−0.3520
ID150.17300.08000−0.13900.23900.1190NaN
ID160.99600.59700.42500.58400.39800.26500.2190
ID170.02700−0.0400−0.21200−0.1990−0.0930
ID180.08000.1790−0.2390−0.4510−0.1130−0.06600.2920
Table A5. Final Positions for sp_{Treble,CI}.
Soft SpeechMedium SpeechLoud SpeechClassic MusicPop MusicEntrance HallStreet
ID01000.146000NaNNaN
ID02−0.0270−0.21900.15300.053000−0.5840
ID030.48400.4650−0.0860−0.5110−0.0200−0.5240−0.4510
ID04−0.5710−0.3190−0.9620−0.5110−0.2790−0.0660−0.5570
ID05−0.19200−0.3850−0.58400−0.20600
ID060.58400.59100.43800.63000.62400.0860NaN
ID07−0.9760−0.9890−0.13300.0460−0.8960−0.9090−0.1460
ID080000−0.08600.5180−0.2260
ID090.38500.30500.18600.63000.3380−0.0130−0.2720
ID100.33800.4180−0.48400.3920−0.4840−0.1590−0.6300
ID110.65700.59700.56400.68400.7370−0.15900.5310
ID12−0.5510−0.7300−0.3250−0.4980−0.08600.4710−0.5240
ID130.378000.61100.0330−0.03300.41100.3920
ID1400−0.5040−0.1330−0.113000.2260
ID15−0.2460−0.2320−0.20600.24600−0.3920NaN
ID160.99600.81600.73700.99600.9960−0.82300.0860
ID170.75700.38500.51800.50400.6900−0.3980−0.3850
ID180.22600.1000−0.2390−0.1460−0.5510−0.07300.2790
Table A6. Final Positions for sp_{Treble,HA}.
Soft SpeechMedium SpeechLoud SpeechClassic MusicPop MusicEntrance HallStreet
ID01000.2790−0.5710−0.2460NaNNaN
ID02−0.060000.14600.053000−0.5840
ID030.48400.4650−0.0860−0.5110−0.4910−0.5240−0.4510
ID04−0.52400.4110−0.4650−0.4980−0.73700.0070−0.5570
ID05−0.41800−0.0270−0.5840−0.1920−0.51800
ID060.65000.11300.53100.49100.1130−0.0800NaN
ID070.13300.0330−0.15300.04600.9760−0.9960−0.1460
ID080.013000−0.03300−0.23900
ID090.15900.1060−0.10000.18600.1060−0.3120−0.4580
ID100.66400.5910−0.35800.6240−0.7900−0.1590−0.7300
ID110.65700.59700.56400.69700.7370−0.15900.5310
ID12−0.4110−0.4650−0.6110−0.1460−0.5840−0.6700−0.8630
ID1300.5440−0.02700.6370−0.00700.41100.3920
ID1400−0.5040−0.1330−0.113000.2260
ID150.2060−0.19200.14600.25900.13900.1190NaN
ID160.9960−0.06000.95600.02700.41800.10600.0930
ID17−0.33800−0.27900.2320−0.3780−0.04000.1730
ID180.21200.1000−0.2460−0.1260−0.5510−0.07300.2790
Table A7. Final Positions for sp_{Volume,CI}.
Soft SpeechMedium SpeechLoud SpeechClassic MusicPop MusicEntrance HallStreet
ID010.13600000NaNNaN
ID020.44700.0960−0.6540−0.287000.28700
ID03−0.2430−0.2790−0.8820−0.3630−0.2120−0.04400.3350
ID040.85000.1680−0.04000.26300.08400.23100.4350
ID050.3270−0.4390−0.9100−0.5830−0.9980−0.9980−0.2430
ID060.42300.38700.80600.32300.77000.8820NaN
ID070.9380−0.2390−0.93400.0520−0.587000
ID0800−0.48300.06800.08000.2000−0.3390
ID090.42300.19200.21200.35900.13200.37100.4070
ID10−0.18400.16400.26300.49900.41100.45100.3430
ID110.34700−0.3630−0.2750−0.16000.4670−0.1120
ID120.76200.18800.47500.23900.33500.75400.9940
ID130.93000.10400.12400.21900.9660−0.15600.9860
ID140.64700.1240−0.4790−0.2430−0.01600.29100.4150
ID150.20000.12800.18800.04800.09200.3430NaN
ID16−0.99800.41100.64300.69800.6110−0.63500.9980
ID170.259000.29500.124000.38700
ID180.363000.34700.571000.16400.2430
Table A8. Final Positions for sp_{Volume,HA}.
Soft SpeechMedium SpeechLoud SpeechClassic MusicPop MusicEntrance HallStreet
ID010.12000000NaNNaN
ID020.12000.0880−0.6540−0.287000.28700
ID030.1520−0.1040−0.88200.15600.26700.34300.5870
ID040.81800.17600.12400.02800.23500.23500.4350
ID050.74600.82200.31500.49900.74200.82600.6310
ID060.5510−0.14400.16800.01600.21200.1440NaN
ID070.9340−0.16400.10800.07600.076000
ID0800.0080−0.770000−0.0160−0.3710
ID090.0960−0.0080−0.08400.0560−0.03200.06800.1160
ID10−0.18400.16400.10800.49900.41100.45100.3070
ID110.34700−0.3630−0.2750−0.68200.0600−0.5390
ID120.2430−0.1400−0.1600−0.4190−0.20400.24300.4030
ID13−0.1480−0.1800−0.0360−0.44300.0680−0.15600
ID140.40300.0240−0.4790−0.2430−0.20800.29100.4150
ID15−0.1800−0.2270−0.2350−0.2310−0.0840−0.0320NaN
ID16−0.99800.16800.2910−0.0520−0.03600.73400.9980
ID170.37900−0.04000.243000.45100
ID18−0.3750−0.5870−0.0880−0.0880−0.53900.1640−0.2080
Table A9. Final Positions for pp_{X,CI}.
Soft SpeechMedium SpeechLoud SpeechClassic MusicPop MusicEntrance HallStreet
ID01−0.53000.1880−0.0070−0.5680−0.7250NaNNaN
ID02−0.7560−0.2840−0.2320−0.5440−0.1500−0.4070−0.5500
ID030.12300.75600.34500.32800.49600.23900.6940
ID04−0.0410−0.0030−0.3930−0.7560−0.5850−0.8170−0.6970
ID050.54400−0.1300−0.4270−0.3690−0.12600.3010
ID060.74900.72100.3860−0.04400.08900.6870NaN
ID07−0.7320−0.7930−0.8440−0.6970−0.7420−0.7110−0.8000
ID08−0.5260−0.1330−0.6360−0.4340−0.0340−0.1780−0.6050
ID09−0.4680−0.5400−0.6500−0.7250−0.6120−0.6050−0.6430
ID100.4960−0.00300.13300.51600.23200.1440−0.1300
ID11−0.73800.47900.6290−0.81400.67700.49200.4580
ID12−0.8380−0.8960−0.7620−0.7730−0.9300−0.5850−0.8680
ID13−0.9710−0.9570−0.7250−0.8750−0.9710−0.8920−0.7970
ID14−0.0550−0.0030−0.06500.335000.12300.1330
ID150.60200.13300.14700.53000.4920−0.5300−0.4510
ID160.7320−0.4620−0.89600.24300.74900.05100.5400
ID17−0.7150−0.6530−0.8240−0.5780−0.39300.4410−0.7900
ID180.6970−0.1470−0.48200.5910−0.15000.0030−0.0750
Table A10. Final Positions for pp_{X,HA}.
Soft SpeechMedium SpeechLoud SpeechClassic MusicPop MusicEntrance HallStreet
ID0100−0.34200.18500NaNNaN
ID02−0.7560−0.2840−0.2320−0.5440−0.1500−0.4070−0.5500
ID03−0.34200.7760−0.0030−0.3320−0.1130−0.02100.0680
ID04−0.0410−0.00300.3930−0.4650−0.5850−0.8170−0.6970
ID05−0.3380−0.0440−0.4820−0.2360−0.1680−0.6840−0.4620
ID06−0.7040−0.4310−0.6360−0.0100−0.3620−0.6840NaN
ID07−0.7320−0.7930−0.8440−0.6970−0.7930−0.7110−0.8000
ID080.67400.14400.68000.591000.19500.5880
ID090.27400.08900.24600.0990−0.23600.55400.5470
ID10−0.1400−0.08500.18500.2700−0.10300.04800.1260
ID110.75900.50900.61200.75900.68000.49200.4580
ID12−0.5440−0.7150−0.2260−0.3320−0.4550−0.6870−0.3830
ID13−0.1950−0.1570−0.0510−0.3760−0.4650−0.3250−0.1470
ID14−0.0550−0.0030−0.06500.031000.12300.1330
ID15−0.1370−0.4720−0.25000−0.28000.04800.0440
ID160.99800.81400.85100.92600.13000.82700.5400
ID170.79000.71100.81700.62900.3380−0.60200.8140
ID180.00300.1950−0.8100−0.5200−0.37300.0030−0.0750
Table A11. Final Positions for pp_{Y,CI}.
Soft SpeechMedium SpeechLoud SpeechClassic MusicPop MusicEntrance HallStreet
ID010.5400−0.10900.66900.55700.7850NaNNaN
ID020.50700.0300−0.36100.04300.09300.1030−0.2320
ID030.2750−0.1560−0.8550−0.4840−0.30100.26200.7390
ID040.7880−0.2550−0.71200.1420−0.29800.7820−0.4240
ID050.782000.8350−0.48700.76500.72200.1820
ID06−0.16900.70600.18600.21900.21200.7520NaN
ID07−0.3180−0.0170−0.0990−0.1460−0.0270−0.05300.4080
ID08−0.5040−0.0600−0.74200.2290−0.06600.0130−0.4740
ID090.31100.11600.09600.25800.22500.47700.4670
ID100.92400.51000.41400.73500.55000.61600.6060
ID110.8080−0.3050−0.54300.5100−0.0660−0.18200.1990
ID120.89100.67300.27200.51400.54300.85500.8480
ID130.89500.93100.73500.82800.86100.84200.8280
ID140.26500.1360−0.18600.202000.28200
ID15−0.2950−0.1620−0.0100−0.2420−0.24800.3880−0.2980
ID160.07000.73200.93100.8880−0.01700.83200.1560
ID170.55000.3110−0.12300.53000.43100.05000.7450
ID180.0130−0.3310−0.3150−0.06300.28200.18600.1860
Table A12. Final Positions for pp_{Y,HA}.
Soft SpeechMedium SpeechLoud SpeechClassic MusicPop MusicEntrance HallStreet
ID0100−0.4770−0.26200NaNNaN
ID020.50700.0300−0.36100.04300.09300.1030−0.2320
ID030.49700.6890−0.0360−0.34100.05600.61300.6660
ID040.7880−0.25500.6290−0.4740−0.29800.7820−0.4240
ID05−0.3480−0.5230−0.59000.5730−0.6160−0.7350−0.4240
ID060.60000.63300.03000.17200.1760−0.0300NaN
ID070.3740−0.0170−0.0990−0.1460−0.0270−0.05300.4080
ID08−0.5370−0.1390−0.68200.229000.1060−0.4340
ID090.08600.0270−0.20900.04300.0270−0.0560−0.3410
ID100.86500.70200.45100.51400.16600.29500.4110
ID110.77500.0500−0.25800.17200.2780−0.18200.1990
ID120.05300.0890−0.1790−0.5470−0.3740−0.37100.2150
ID13−0.0990−0.14600.0270−0.3350−0.0830−0.08600.2720
ID140.25800.1360−0.1860−0.046000.28200
ID150.0730−0.46700.15600−0.07600.05000.0530
ID160.88100.81200.9970−0.03300.42400.76900.1560
ID17−0.0500−0.2820−0.6200−0.0990−0.28200.76500.0930
ID18−0.3310−0.3540−0.6290−0.2750−0.25800.18600.1860

Appendix B

Overview of all final slider and point positions for each participant (n = 18) and the retest sound conditions: “loud speech in noise” and “pop music”. “NaN” stands for measurement conditions that were not performed and were therefore not included in the results.
Table A13. Final Positions for sp_{Bass,CI}.
Loud SpeechPop Music
ID01NaNNaN
ID02−0.38500
ID03−0.3920−0.3580
ID040.78300.7570
ID05−0.44500.1390
ID06−0.08000.0070
ID070.79600.2790
ID0800.0070
ID090.45800.4250
ID100.15300.5310
ID11−0.43100.1190
ID12−0.2260−0.2520
ID130.79600.7230
ID1400.1590
ID150.3050−0.3190
ID160.26500.2650
ID170.71700.5310
ID18−0.13900.0070
Table A14. Final Positions for sp_{Bass,HA}.
Loud SpeechPop Music
ID01NaNNaN
ID020.44500
ID03−0.3920−0.2120
ID040.78300.3920
ID05−0.71700
ID060.3320−0.3050
ID070.79600.3320
ID080−0.2460
ID0900.1990
ID100.17900.7830
ID11−0.43100.1190
ID12−0.7960−0.4450
ID130.30500.1000
ID1400.1590
ID15−0.16600.2520
ID160.26500.2650
ID17−0.35800.2120
ID18−0.13900.0070
Table A15. Final Positions for sp_{Mids,CI}.
Loud SpeechPop Music
ID01NaNNaN
ID0200
ID03−0.20600
ID040.35200
ID0500
ID060.21200.0660
ID07−0.5910−0.2260
ID0800
ID090.33800.3720
ID100.12600.1000
ID110.11300.3520
ID12−0.39800.0330
ID130.82300.1190
ID1400
ID150.11900.1530
ID160.08000.5380
ID17−0.2990−0.3450
ID18−0.2320−0.0270
Table A16. Final Positions for sp_{Mids,HA}.
Loud SpeechPop Music
ID01NaNNaN
ID0200
ID03−0.20600
ID040.35200
ID05−0.23900
ID060.62400.5110
ID07−0.5910−0.2320
ID0800
ID0900.0600
ID100.4840−0.1660
ID110.11300.3520
ID12−0.7630−0.2720
ID130.57700.6370
ID1400
ID150.11900.1530
ID160.08000.5380
ID170−0.1660
ID18−0.2990−0.0270
Table A17. Final Positions for sp_{Treble,CI}.
Loud SpeechPop Music
ID01NaNNaN
ID0200
ID03−0.2720−0.5380
ID04−0.4110−0.5640
ID050−0.5840
ID060.51100.5970
ID07−0.6640−0.3380
ID08−0.10600
ID09−0.1590−0.2060
ID10−0.3650−0.0730
ID110.67700.5180
ID12−0.2060−0.0070
ID130.62400.2920
ID1400
ID15−0.2390−0.2720
ID160.33200.4580
ID17−0.5040−0.1590
ID18−0.3120−0.0860
Table A18. Final Positions for sp_{Treble,HA}.
Loud SpeechPop Music
ID01NaNNaN
ID020.41100
ID03−0.27200.1260
ID04−0.48400
ID0500
ID06−0.13900.5710
ID07−0.6640−0.3380
ID080.0070−0.0130
ID09−0.2520−0.3920
ID10−0.6170−0.4310
ID110.67700.4710
ID12−0.6640−0.6700
ID130.6240−0.2060
ID1400
ID15−0.07300.2260
ID160.33200.4580
ID17−0.59700.3190
ID18−0.5180−0.0860
Table A19. Final Positions for sp_{Volume,CI}.
Loud SpeechPop Music
ID01NaNNaN
ID02−0.31900.2830
ID03−0.6860−0.2550
ID04−0.29900.5230
ID050.88200.8740
ID060.66200.4510
ID070−0.1720
ID080.06400
ID090.29500.3350
ID100.20800
ID110.09600.0240
ID120.22300.2910
ID13−0.22700.6390
ID1400.0120
ID150.11600.2670
ID160.47100.4910
ID1700
ID180.19200.1840
Table A20. Final Positions for sp_{Volume,HA}.
Loud SpeechPop Music
ID01NaNNaN
ID02−0.3190−0.0200
ID03−0.26300.1280
ID040.10400.2910
ID05−0.99000
ID06−0.3990−0.1760
ID070−0.1720
ID08−0.2310−0.0280
ID09−0.02000
ID100.25100.2000
ID11−0.4350−0.4790
ID12−0.2230−0.1280
ID13−0.8500−0.0600
ID1400.0120
ID15−0.2310−0.2590
ID160.47100.4910
ID1700
ID18−0.4510−0.1800
Table A21. Final Positions for pp_{X,CI}.
Loud SpeechPop Music
ID01NaNNaN
ID02−0.3660−0.5230
ID030.31100.0510
ID04−0.6430−0.2770
ID05−0.06800.0750
ID060.74500.7790
ID07−0.1300−0.8100
ID080.0340−0.0890
ID09−0.4960−0.5440
ID10−0.47900.0850
ID110.41000.3380
ID12−0.7010−0.5640
ID13−0.4990−0.7350
ID14−0.12600.1950
ID150.1710−0.4480
ID160.84400.3860
ID17−0.7180−0.7250
ID18−0.41700.0340
Table A22. Final Positions for pp_{X,HA}.
Loud SpeechPop Music
ID01NaNNaN
ID02−0.3660−0.5230
ID030.1060−0.1060
ID04−0.6430−0.2770
ID05−0.4480−0.4620
ID06−0.7250−0.6940
ID07−0.1300−0.8100
ID080.32500.1400
ID090.21900.3620
ID10−0.23600.0480
ID110.39300.3760
ID12−0.5950−0.4620
ID13−0.1640−0.2190
ID14−0.12600.1950
ID15−0.4240−0.1200
ID160.84400.5330
ID170.76600.7150
ID18−0.41700.0340
Table A23. Final Positions for pp_{Y,CI}.
Loud SpeechPop Music
ID01NaNNaN
ID020.13900.1950
ID03−0.9110−0.2350
ID04−0.73900.0930
ID050.82800.5800
ID060.05600.7390
ID07−0.37400.1290
ID080.0300−0.0730
ID090.40100.5700
ID100.51000.2090
ID11−0.11300.6000
ID120.26800.3680
ID130.33100.7290
ID14−0.00700.0100
ID15−0.47700.3350
ID160.77500.6390
ID17−0.07000.6200
ID18−0.3150−0.1420
Table A24. Final Positions for pp_{Y,HA}.
Loud SpeechPop Music
ID01NaNNaN
ID020.13900.1950
ID03−0.8120−0.0660
ID04−0.73900.0930
ID05−0.6760−0.4110
ID060.33500.1920
ID07−0.37400.1290
ID080.0300−0.0630
ID090.04300.0430
ID100.30100.3180
ID11−0.58600.1590
ID12−0.2150−0.1520
ID13−0.03600.1230
ID14−0.00700.0100
ID15−0.1290−0.0270
ID160.45700.3110
ID17−0.3640−0.1660
ID18−0.3150−0.1420

References

  1. Lai, Y.H.; Liu, T.C.; Li, P.C.; Shih, W.T.; Young, S.T. Development and Preliminary Verification of a Mandarin-Based Hearing-Aid Fitting Strategy. PLoS ONE 2013, 8, e80831.
  2. Scollie, S.; Seewald, R.; Cornelisse, L.; Moodie, S.; Bagatto, M.; Laurnagaray, D.; Beaulac, S.; Pumford, J. The Desired Sensation Level Multistage Input/Output Algorithm. Trends Amplif. 2005, 9, 159–197.
  3. Roditi, R.E.; Poissant, S.F.; Bero, E.M.; Lee, D.J. A predictive model of cochlear implant performance in postlingually deafened adults. Otol. Neurotol. 2009, 30, 449–454.
  4. Budenz, C.L.; Cosetti, M.K.; Coelho, D.H.; Birenbaum, B.; Babb, J.; Waltzman, S.B.; Roehm, P.C. The effects of cochlear implantation on speech perception in older adults. J. Am. Geriatr. Soc. 2011, 59, 446–453.
  5. Holden, L.K.; Finley, C.C.; Firszt, J.B.; Holden, T.A.; Brenner, C.; Potts, L.G.; Gotter, B.D.; Vanderhoof, S.S.; Mispagel, K.; Heydebrand, G.; et al. Factors affecting open-set word recognition in adults with cochlear implants. Ear Hear. 2013, 34, 342–360.
  6. Vroegop, J.L.; Dingemanse, J.G.; van der Schroeff, M.P.; Metselaar, R.M.; Goedegebure, A. Self-Adjustment of Upper Electrical Stimulation Levels in CI Programming and the Effect on Auditory Functioning. Ear Hear. 2017, 38, e232–e240.
  7. Gygi, B.; Hall, D.A. Background sounds and hearing-aid users: A scoping review. Int. J. Audiol. 2016, 55, 1–10.
  8. Lazard, D.S.; Vincent, C.; Venail, F.; Van de Heyning, P.; Truy, E.; Sterkers, O.; Skarzynski, P.H.; Skarzynski, H.; Schauwers, K.; O’Leary, S.; et al. Pre-, Per- and Postoperative Factors Affecting Performance of Postlinguistically Deaf Adults Using Cochlear Implants: A New Conceptual Model over Time. PLoS ONE 2012, 7, e48739.
  9. Potts, L.G.; Skinner, M.W.; Litovsky, R.A.; Strube, M.J.; Kuk, F. Recognition and localization of speech by adult cochlear implant recipients wearing a digital hearing aid in the nonimplanted ear (bimodal hearing). J. Am. Acad. Audiol. 2009, 20, 353–373.
  10. Sucher, C.M.; McDermott, H.J. Bimodal stimulation: Benefits for music perception and sound quality. Cochlear Implant. Int. 2009, 10 (Suppl. 1), 96–99.
  11. Gößwein, J.A.; Huber, R.; Bruns, T.; Rennies, J.; Kollmeier, B. Audiologist-Supervised Self-Fitting Fine Tuning of Hearing Aids. In Proceedings of the Jahrestagung der Deutschen Gesellschaft für Audiologie, Halle, Germany, 28 February–3 March 2018; Conference Paper 21.
  12. Vaerenberg, B.; Smits, C.; De Ceulaer, G.; Zir, E.; Harman, S.; Jaspers, N.; Tam, Y.; Dillon, M.; Wesarg, T.; Martin-Bonniot, D.; et al. Cochlear Implant Programming: A Global Survey on the State of the Art. Sci. World J. 2014, 2014, 501738.
  13. Botros, A.M.; Banna, R.; Maruthurkkara, S. The next generation of Nucleus fitting: A multiplatform approach towards universal cochlear implant management. Int. J. Audiol. 2013, 52, 485–494.
  14. Sonova AG. myPhonak 4.0 Gebrauchsanweisung (User Guide), Version 1.02/2020-06/NLG. 2020. Available online: https://www.phonak.com/content/dam/phonakpro/gc_hq/de/products_solutions/eSolutions/apps/myphonak/documents/User_Guide_myphonak_app.pdf (accessed on 17 April 2023).
  15. Cochlear Limited. Nucleus Smart App User Guide P1285539, Version 3.0. 2020. Available online: https://www.cochlear.com/4859da8b-1208-45fa-9b51-068522aaf83d/D1654887_4-01_EN_CP1150_Android_App_UG_EMEA_WEB.pdf (accessed on 17 April 2023).
  16. Dreschler, W.A.; Keidser, G.; Convery, E.; Dillon, H. Client-based adjustments of hearing aid gain: The effect of different control configurations. Ear Hear. 2008, 29, 214–227.
  17. Rennies, J.; Oetting, D.; Baumgartner, H.; Appell, J.E. User-interface concepts for sound personalization in headphones. In Proceedings of the Audio Engineering Society Conference on Headphone Technology, Aalborg, Denmark, 24–26 August 2016.
  18. Li, T.; Yu, T.; Hawkins, B.S.; Dickersin, K. Design, Analysis, and Reporting of Crossover Trials for Inclusion in a Meta-Analysis. PLoS ONE 2015, 10, e0133023.
  19. Ching, T.Y.C.; Incerti, P.; Hill, M. Binaural Benefits for Adults Who Use Hearing Aids and Cochlear Implants in Opposite Ears. Ear Hear. 2004, 25, 9–21.
  20. Auletta, G.; Franzè, A.; Laria, C.; Piccolo, C.; Papa, C.; Riccardi, P.; Pisani, D.; Sarnelli, A.; Del Vecchio, V.; Malesci, R.; et al. Integrated Bimodal Fitting for Unilateral CI Users with Residual Contralateral Hearing. Audiol. Res. 2021, 11, 200–206.
  21. Digeser, F.M.; Engler, M.; Hoppe, U. Comparison of bimodal benefit for the use of DSL v5.0 and NAL-NL2 in cochlear implant listeners. Int. J. Audiol. 2020, 59, 383–391.
  22. Dorman, M.F.; Loizou, P.; Wang, S.; Zhang, T.; Spahr, A.; Loiselle, L.; Cook, S. Bimodal cochlear implants: The role of acoustic signal level in determining speech perception benefit. Audiol. Neuro-Otol. 2014, 19, 234–238.
  23. Cohen, J. Statistical Power Analysis for the Behavioral Sciences; Taylor and Francis Group: Hoboken, NJ, USA, 1988.
  24. Kendall, M. Rank Correlation Methods, 4th ed.; Charles Griffin & Company Ltd.: London, UK, 1975.
  25. Kendall, M.; Babington Smith, B. On the Method of Paired Comparisons. Biometrika 1940, 31, 324–345.
  26. Wickelmaier, F.; Schmid, C. A Matlab function to estimate choice model parameters from paired-comparison data. Behav. Res. Methods Instrum. Comput. 2004, 36, 29–40.
Figure 1. (left) Audiograms of all 18 participants for the HA side (dashed lines), as well as the mean (continuous line) across all participants. (right) The same audiograms as a boxplot with the 25th and 75th percentiles, the median, and the mean (continuous line) across all participants.
Figure 2. (a) The 2D-Surface user interface with its two movable pointers for the settings of each side and the possibility to switch between left, right, and coupled mode. (b) The EQ user interface is shown with its four sliders for the settings and the possibility to switch between left, right, and coupled mode.
Figure 3. Δ Gains for the 2D-Surface UI: example gain curves for different point positions pp_x along the x-axis with a fixed value of pp_y = 0.5. While pp_x = −1 gives maximum attenuation of the high frequencies, pp_x = 1 results in maximum amplification of the high frequencies.
Figure 4. Study procedure: The flow of the study appointments, from top to bottom.
Figure 5. Final Positions EQ UI: Violin plot of all final positions (n = 18) for every slider of the EQ UI in the different sound conditions: soft speech (soft sp), middle-loud speech in quiet (med sp), loud speech in noise (loud sp), classic music (cla mus), pop music (pop mus), entrance hall (ent hall), and beside a busy street (str). The black line shows the mean value and the black point the median value.
Figure 6. Final Positions 2D-Surface UI: Violin Plot of all final positions (n = 18) for both directions p p x and p p y of the point in the 2D-Surface UI for all sound conditions: soft speech (soft sp), middle-loud speech in quiet (med sp), loud speech in noise (loud sp), classic music (cla mus), pop music (pop mus), entrance hall (ent hall), and beside a busy street (str). The black line shows the mean value and the black point the median value.
Figure 7. Mean KDE for EQ: the resulting KDE_mean for each slider in each condition and device. The vertical line marks the final position averaged over all 18 participants.
Figure 8. Mean KDE for 2D-Surface: the resulting KDE_mean represented as a heatmap, in which the brighter the area, the more often the pointer was moved there. The final position averaged over all participants is shown as a white dot.
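A mean KDE of this kind could, for instance, be obtained by averaging per-participant kernel density estimates of the pointer trajectories and evaluating the result on a regular grid. The following is an illustrative sketch, not the authors' implementation; the function name and data layout are our assumptions:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Sketch (assumption, not the published code): estimate one 2D KDE per
# participant from their pointer trajectory (pp_x, pp_y in [-1, 1]),
# evaluate each on a common grid, and average the densities. Bright
# cells in the resulting heatmap mark frequently visited positions.
def mean_kde_heatmap(trajectories, grid_size=50):
    """trajectories: list of arrays of shape (2, n_samples), one per participant."""
    axis = np.linspace(-1.0, 1.0, grid_size)
    gx, gy = np.meshgrid(axis, axis)
    grid = np.vstack([gx.ravel(), gy.ravel()])
    densities = [gaussian_kde(traj)(grid).reshape(grid_size, grid_size)
                 for traj in trajectories]
    return np.mean(densities, axis=0)  # KDE_mean on the grid
```

`gaussian_kde` uses an automatic (Scott's rule) bandwidth by default; the bandwidth actually used in the study is not restated here.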
Figure 9. Self-Fitting Duration: Histogram of the required time in seconds for all self-fittings for the two user interfaces.
Figure 10. Blind preference decisions: The table shows how often one setting was preferred over another in the blind comparison test.
Figure 11. Blind preference test results: the table shows each participant's preferred setting for each condition. "Undecided" means that the participant did not answer consistently, in which case a winner could not be determined.
Figure 12. Violin plot of the differential slider positions of the test–retest conditions for the EQ UI. The black line shows the mean value, and the black point shows the median value.
Figure 13. Violin plot of the differential point positions of the test–retest conditions for the 2D-Surface UI. The black line shows the mean value, and the black point shows the median value.
Table 1. Short overview of the demographics of all 18 participants, given in years.

| | Mean (Years) | Min (Years) | Max (Years) |
|---|---|---|---|
| Age at enrollment | 65.7 | 54 | 77 |
| Duration of CI use | 4.7 | 3 | 11 |
| Duration of HA use | 20.6 | 5 | 60 |
| Duration of hearing impairment (first supplied side) | 32.8 | 8 | 67 |
| Duration of profound hearing loss (CI side) | 9.9 | 3 | 66 |
Table 2. Supply per side for each participant.

| Subject ID | Left | Right |
|---|---|---|
| 01 | CI | HA |
| 02 | CI | HA |
| 03 | HA | CI |
| 04 | CI | HA |
| 05 | HA | CI |
| 06 | HA | CI |
| 07 | CI | HA |
| 08 | CI | HA |
| 09 | CI | HA |
| 10 | CI | HA |
| 11 | CI | HA |
| 12 | CI | HA |
| 13 | CI | HA |
| 14 | HA | CI |
| 15 | CI | HA |
| 16 | CI | HA |
| 17 | CI | HA |
| 18 | HA | CI |
Table 3. Center frequencies of the STFT filter bank and normalized value tables for the self-fitting adjustments.

| Filter Bank Bin | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Center frequency [Hz] | 172 | 345 | 517 | 689 | 861 | 1034 | 1206 | 1378 | 1550 | 1723 |
| EQ-bass | 0.9667 | 0.8 | 0.4667 | 0.1333 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| EQ-mid | 0.0333 | 0.2 | 0.5333 | 0.8667 | 1.0 | 0.97 | 0.93 | 0.87 | 0.73 | 0.6 |
| EQ-treble | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.03 | 0.07 | 0.13 | 0.27 | 0.4 |
| 2D-Surface | −1.0 | −1.0 | −0.8416 | −0.5918 | −0.3979 | −0.2396 | −0.1057 | 0.0103 | 0.1595 | 0.3255 |

| Filter Bank Bin | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 |
|---|---|---|---|---|---|---|---|---|---|---|
| Center frequency [Hz] | 1981 | 2326 | 2670 | 3015 | 3445 | 4048 | 4823 | 5943 | 7494 | 9647 |
| EQ-bass | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| EQ-mid | 0.47 | 0.37 | 0.23 | 0.13 | 0.0667 | 0.03 | 0.03 | 0.0 | 0.0 | 0.0 |
| EQ-treble | 0.53 | 0.63 | 0.77 | 0.87 | 0.9333 | 0.97 | 0.97 | 1.0 | 1.0 | 1.0 |
| 2D-Surface | 0.4648 | 0.5848 | 0.7147 | 0.8486 | 0.9825 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
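One plausible reading of Table 3 is that each EQ slider scales its per-bin weight, and the weighted contributions are summed into a per-bin gain offset. The sketch below illustrates this reading; the combination rule and the maximum gain range are our assumptions, not the published formula:

```python
import numpy as np

# Assumed combination (not the authors' exact mapping): each EQ slider
# position in [-1, 1] is scaled by its per-bin weight from Table 3 and a
# maximum gain range, giving a delta-gain curve over the 20 STFT bins.
W_BASS   = np.array([0.9667, 0.8, 0.4667, 0.1333] + [0.0] * 16)
W_MID    = np.array([0.0333, 0.2, 0.5333, 0.8667, 1.0, 0.97, 0.93, 0.87,
                     0.73, 0.6, 0.47, 0.37, 0.23, 0.13, 0.0667, 0.03,
                     0.03, 0.0, 0.0, 0.0])
W_TREBLE = np.array([0.0] * 5 + [0.03, 0.07, 0.13, 0.27, 0.4, 0.53, 0.63,
                     0.77, 0.87, 0.9333, 0.97, 0.97, 1.0, 1.0, 1.0])

def eq_delta_gains(bass, mid, treble, max_gain_db=12.0):
    """Map EQ slider positions in [-1, 1] to per-bin gain offsets in dB."""
    return max_gain_db * (bass * W_BASS + mid * W_MID + treble * W_TREBLE)
```

With all sliders at zero the offset vanishes in every bin; pushing only the bass slider up raises mainly the four lowest bins (below roughly 700 Hz).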
Table 4. Properties of the differential distribution of the test–retest distributions.

| Parameter | Std | Mean |
|---|---|---|
| pp_x,diff | 0.41 | −0.05 |
| pp_y,diff | 0.37 | −0.03 |
| sp_Treble,diff | 0.41 | 0.05 |
| sp_Mid,diff | 0.32 | 0.04 |
| sp_Bass,diff | 0.46 | −0.06 |
| sp_Volume,diff | 0.46 | −0.06 |
Table 5. Comparison of the test–retest distributions.

| Parameter | Corr | Pro |
|---|---|---|
| pp_x,te and pp_x,re | 0.62 | <0.01 |
| pp_y,te and pp_y,re | 0.59 | <0.01 |
| sp_Treble,te and sp_Treble,re | 0.52 | <0.01 |
| sp_Mid,te and sp_Mid,re | 0.41 | <0.01 |
| sp_Bass,te and sp_Bass,re | 0.31 | <0.01 |
| sp_Volume,te and sp_Volume,re | 0.31 | <0.01 |

Share and Cite

MDPI and ACS Style

Kliesch, S.; Chalupper, J.; Lenarz, T.; Büchner, A. Evaluation of Two Self-Fitting User Interfaces for Bimodal CI-Recipients. Appl. Sci. 2023, 13, 8411. https://doi.org/10.3390/app13148411

