
Previous binaural experience supports compensatory strategies in hearing-impaired children’s auditory horizontal localization

  • Andrea Gulli ,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Visualization, Writing – original draft

    andrea.gulli@unipd.it

    Affiliation Department of Engineering and Management, University of Padua, Padua, Italy

  • Federico Fontana,

    Roles Conceptualization, Formal analysis, Methodology, Supervision, Validation, Writing – review & editing

    Affiliation HCI Lab, Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy

  • Alessandro Aruffo,

    Roles Investigation, Software

    Affiliation Otorhinolaryngology and Audiology, Institute for Maternal and Child Health IRCCS “Burlo Garofolo”, Trieste, Italy

  • Eva Orzan ,

    Contributed equally to this work with: Eva Orzan, Enrico Muzzi

    Roles Funding acquisition, Project administration, Resources

    Affiliation Otorhinolaryngology and Audiology, Institute for Maternal and Child Health IRCCS “Burlo Garofolo”, Trieste, Italy

  • Enrico Muzzi

    Contributed equally to this work with: Eva Orzan, Enrico Muzzi

    Roles Conceptualization, Formal analysis, Funding acquisition, Methodology, Project administration, Resources, Supervision, Validation, Writing – review & editing

    Affiliation Otorhinolaryngology and Audiology, Institute for Maternal and Child Health IRCCS “Burlo Garofolo”, Trieste, Italy

Abstract

This study investigates auditory localization in children with a diagnosis of hearing impairment rehabilitated with bilateral cochlear implants or hearing aids. Localization accuracy in the anterior horizontal field and its distribution along the angular position of the source were analyzed. Participants performed a localization task in a virtual environment where they could move their heads freely and were asked to point to an invisible sound source. The source was rendered using a loudspeaker set arranged as a semi-circular array in the horizontal plane. The participants’ head positions were tracked while their hands pointed to the auditory target; the preferred listening position and the onset of active strategies involving head movement were extracted. A significant correlation was found between age and localization accuracy and age and head movement in children with bilateral hearing aids. Investigating conditions where no, one, or both hearing devices were turned off, it was found that asymmetrical hearing caused the largest errors. Under this specific condition, head movement was used erratically by children with bilateral cochlear implants who focused on postures maximizing sound intensity at the more sensitive ear. Conversely, those with a consolidated binaural hearing experience could use dynamic cues even if one hearing aid was turned off. This finding may have implications for the clinical evaluation and rehabilitation of individuals with hearing impairments.

Introduction

Spatial hearing is an essential component of auditory scene analysis, primarily aimed at localizing acoustic events, segregating auditory streams, and orienting multisensory attention [1]. Sound localization is based on sound directionality and distance estimation derived from the binaural processing of loudness, time delay, phase, and spectral cues. The most important cues are the interaural time difference (ITD) and the interaural level difference (ILD), determined by the spatial separation of the two ears [2]. Useful information for localization can also be extracted from monaural cues, such as time and level differences between individual spectral components. Experiments providing identical signals at both ears showed that monaural cues help define the anterior and posterior sectors of the midplane, the elevation angle, and the distance of the auditory event [3]. Slight head and body movements, which occur even when one tries to keep still while listening, effectively resolve front-back confusion [4] and improve the precision of auditory spatial recognition [5].

Spontaneous head movements in response to auditory cues enhance horizontal localization by turning static binaural cues into dynamic information [6]. This dynamic exploration of acoustic space has been defined as active listening [7]. Nevertheless, a common principle underlying the different head dynamics observed during sound localization has yet to be found [8]. Head movement patterns differ among individuals, suggesting that every active localization strategy entails subjective rotational and translational movements involving the torso, head, and eyes [9]. Therefore, interpreting head movements during acoustic localization is not straightforward across populations. Normal-hearing (NH) individuals in a simulated asymmetric hearing loss condition may exploit their binaural experience [10] and behave differently from a hearing-impaired (HI) population. An elderly population, NH or HI alike, can localize a sound source more accurately through active listening than children can [11].

Hearing loss impacts every facet of auditory perception, including the ability to accurately determine the direction of sound sources [12]. Spatial hearing impairment harms awareness of one’s surroundings, personal safety, and social interaction. Binaural restoration using bilateral cochlear implants (CIs) offers a better quality of hearing than unilateral implantation [13]. Recent advances in sound processing technologies feature elements of spatial rendering for auditory impaired individuals [14]. However, in many cases, rehabilitation through acoustic or electric auditory stimulation fails to restore the cues relevant to spatial hearing [5]. Bilateral hearing aids (HAs) should provide interaural level and time cues. Nevertheless, inconsistencies in localization have often been found in patients, suggesting that the benefit, although significant, is usually poorer than predicted [15]. On the other hand, CIs distort ITD cues, so patients must rely on ILD cues for sound localization [16]. Furthermore, since CI microphones are typically positioned behind the pinna, most of the amplitude and frequency cues conveyed by the outer ear are lost. The resulting lack of monaural cues deprives infants and children of the timely development of their acoustic localization skills [17].

Restoring normal hearing is critical in children because hearing loss can harm regular speech and language development. HAs [18] and CIs [19] proved effective in improving speech perception and production. It is nevertheless crucial to evolve traditional clinical assessments based on pure tone audiometry [20]. Assessing the benefits of assistive hearing devices during one’s everyday routine tasks can provide future directions for technological developments.

Previous studies demonstrated that children with bilateral cochlear implants are sensitive to ILDs, possibly due to monaural level cues [11]. Children who are deaf from birth have weak or absent sensitivity to ITDs. Conversely, children who could rely on previous listening experience are sensitive to these cues [11]. Another study confirmed that better binaural fusion is associated with an extended hearing experience before cochlear implantation. These observations highlight the importance of enabling auditory perception as an essential component in children’s development [21]. On the other hand, binaural localization in HI children who wear a device during their auditory development is still partially unexplored, and the role of head movement is yet to be understood.

The primary objective of this study was to examine how two pediatric populations—one using bilateral HAs and the other using CIs—localize sound in the horizontal plane. We assessed their spatial hearing abilities based on performance, active listening, and preferred listening position, then compared these two populations with each other and with NH listeners. The second objective was to estimate the potential of bilateral assistive devices to induce binaural sensitivity. We analyzed the effect of deactivating one or both devices by examining the residual localization ability as a function of the sound source position.

The correlation between variables also provided insights into potential localization strategies. Previous research has shown that NH listeners increase their head movements during a localization task with simulated asymmetric hearing loss [22]. We aimed to determine whether this behavior also occurs in listeners using HAs and CIs, either as a response to asymmetric hearing conditions, or rather as a characteristic response of individuals with a consolidated binaural hearing experience. Correlating their head angular positions and movements with localization accuracy gave insights into their residual binaural ability. This helped us better understand the role of head position and movements in HA and CI populations. Furthermore, a correlation analysis between localization abilities and age provided information on one’s ability to consolidate spatial hearing through experience.

The task was performed in a mixed (i.e., real and virtual) environment. Listeners wearing a head-mounted display (HMD) used a virtual laser beam to point to a sound source position reproduced by a loudspeaker array. This design choice was supported by research that found no significant difference in the horizontal localization accuracy of NH listeners in real environments or virtual replicas [23]. Head movements and hand pointing were simultaneously tracked to collect temporal data about head motor activity during the localization task. Based on a setup and methodology already tested on an NH population [24], we assessed whether and how head dynamics and orientation could compensate for adverse listening conditions. We turned off one hearing device causing asymmetric listening, or both devices when possible, thus restoring native listening. This experimental design established a valid procedure for observing the onset of binaural sensitivity.

A preliminary analysis [25] of the results from this procedure has been largely reformulated here. We interpreted head movements in such adverse hearing conditions as an indicator of subjective willingness to restore binaural cues. We expected that children with no motor disorders would put diverse compensation strategies into action, depending on their residual localization ability, once their bilateral hearing devices were switched off on either or both ears. We aimed to compare their active listening against that of NH individuals [26]. We hypothesized that bilateral hearing devices helped HI children with spatial hearing but failed to evoke ITD cues; sensitivity to these cues was instead the result of a consolidated hearing experience. More speculatively, classifying HI children’s compensatory strategies may inform future rehabilitation and training for localization, speech-in-noise detection, and spatial sound listening [27]. For example, knowing that ITD sensitivity can be neither found nor enabled in a particular population may lead to rehabilitation programs focusing on monaural localization using adequate acoustic stimuli.

Materials and methods

The study was approved by the Institutional Review Board of the Institute for Maternal and Child Health IRCCS “Burlo Garofolo” (Trieste, Italy) under the project “Ricerca Corrente 17/23”. Informed consent was obtained from the parents of each participant.

Participants

Twenty-two HI children (13 males and 9 females, mean age μ = 10.45 years, standard deviation σ = 3.13 years) participated in the experiment. Nine were CI listeners, and thirteen were HA listeners. Children were affected by non-syndromic hearing loss (“GEN NO SDR”) in 8 cases (6 GJB2 gene mutations, 2 other gene mutations), syndromic hearing loss (“SDR”) in 4 (2 Usher syndromes, 1 chromosomal instability, 1 Waardenburg syndrome), and an enlarged vestibular aqueduct (inner ear malformation, “IEM”) in 1. Other causes of hearing loss (“Other”) were congenital cytomegalovirus infection in 2 cases, chemotherapy with platinum derivatives for neuroblastoma in 2, preterm delivery in 1, and prolonged neonatal intensive care unit stay in 1 case. The cause was not identified (“ND”) in 3 cases. The two populations were similar in age (μ = 10.3 years, σ = 3.0 years for the CI listeners and μ = 10.5 years, σ = 3.2 years for the HA listeners), but they differed substantially in the interaural difference in experience with their devices (μ = 2.4 years, σ = 2.9 years for the CI listeners and μ = 0.1 years, σ = 0.3 years for the HA listeners). All participants confirmed verbally that they were right-handed and had no diagnosis of motor impairment. Recruitment started on June 17th, 2022, and ended on January 19th, 2023. Anonymized data became accessible to the authors on January 22nd, 2023. Table 1 displays the collected participants’ data.

Table 1. Age, cause of hearing impairment (GEN NO SDR: Non-syndromic hearing loss; SDR: Syndromic hearing loss; IEM: Inner ear malformation; ND: Not identified, other: Other causes), ear devices, and years of experience with the device for each child.

https://doi.org/10.1371/journal.pone.0312073.t001

Setup

The acoustic reproduction system consisted of 13 Seeburg i4 loudspeakers (SEEBURG acoustic line GmbH) driven by a Sonible d:24 multi-channel amplifier (Sonible GmbH). The loudspeakers were arranged as a semi-circular array with a radius of 1.4 m in a small enclosure measuring 3×2.6 m, with a 60 dB reverberation time (T60) of 200 ms. With this array, 13 sound sources were reproduced from equally spaced horizontal angles of arrival across an egocentric scene spanning −90° to +90°, with 15° between adjacent loudspeakers.
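The array geometry can be expressed in a few lines; this is an illustrative sketch (variable names are ours, not from the study's software):

```python
import numpy as np

N_SPEAKERS = 13
RADIUS = 1.4  # m, distance from the listener's head at zero elevation

# Equally spaced angles of arrival from -90 deg to +90 deg (15 deg apart).
angles_deg = np.linspace(-90, 90, N_SPEAKERS)

# Cartesian loudspeaker positions relative to the seat, with 0 deg straight
# ahead (the y axis) and positive angles to the listener's right.
theta = np.radians(angles_deg)
positions = np.column_stack((RADIUS * np.sin(theta), RADIUS * np.cos(theta)))
```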

A three-dimensional (3D) virtual environment (VE) was developed in the Unity3D programming environment. An Oculus Quest 2 HMD, including the Oculus Touch hand controllers (Meta Platforms Technologies Ireland Limited, Dublin, Ireland), was employed to reproduce the visual scene. The HMD tracks the 3D position of the head with submillimeter precision [28]. The four-valued quaternion representing the orientation of the head in the 3D scene had a precision of ±1° at a 20 Hz sampling rate [29]. The data from the HMD were received via the MQTT protocol [30], running on a 2.4 GHz Wi-Fi connection, by an MQTT broker deployed as a Docker container (Docker, Inc.). From there, data were sent to a custom client app, allowing the experimenter to monitor head tracking and hand pointing and to check that the connection between the Oculus and the computer was constantly up and running. A Max (Cycling ’74) real-time sound synthesis patch running on the same computer reproduced the auditory scene at runtime. Sounds with 16-bit resolution at a 44.1 kHz sampling frequency were sent to the loudspeakers by a MADIface USB 2.0 Audio Interface (RME GmbH).

A seat whose height could be adjusted by the experimenter was placed at the focal point of the semi-circular array so that the head was 1.4 m from the loudspeakers at zero elevation. The experimenter aligned each virtual source to the corresponding loudspeaker by aiming at each speaker from the seat and then reading the angle displayed by the client app. This angle defined the target angle. The angular resolution of the pointer was set to 1°, based on the rotational accuracy of the Oculus controller reported in the literature [31].

Stimuli

The acoustic stimulus consisted of pulsated pink noise, with each burst lasting 200 ms and comprising a 100 ms linear onset and a 100 ms linear decay. Adjacent noise bursts were separated by 200 ms of silence. This stimulus was selected because normal-hearing listeners localize it easily, owing to its rich broadband spectrum [32]. Furthermore, it provides a periodic temporal envelope that enables listeners to capture the binaural cues relevant to spatial hearing. The pulsated stimulus lasted until a participant produced a response. It was presented at a sound pressure level (SPL) of 65 ± 1 dB, measured with a calibrated meter (XL2 Sound Level Meter, NTi Audio). SPL was measured during setup by aligning the meter with the experimenter’s external ear while he was seated at the focal point. The measurement was repeated for each loudspeaker on both ears.
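A stimulus of this shape can be synthesized as follows. This is a minimal sketch, not the study's Max patch: the FFT-based pink-noise approximation, the triangular 200 ms envelope (100 ms up, 100 ms down), and the normalization are our assumptions.

```python
import numpy as np

FS = 44100  # sampling frequency (Hz), as in the setup


def pink_noise(n, rng):
    """Approximate pink (1/f) noise by shaping white noise in the frequency domain."""
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, d=1 / FS)
    freqs[0] = freqs[1]          # avoid division by zero at DC
    spectrum /= np.sqrt(freqs)   # 1/f power spectrum -> 1/sqrt(f) amplitude
    x = np.fft.irfft(spectrum, n)
    return x / np.max(np.abs(x))


def pulsated_stimulus(n_bursts, rng=None):
    """Pink-noise bursts of 200 ms (100 ms linear onset + 100 ms linear decay),
    each followed by 200 ms of silence."""
    rng = rng if rng is not None else np.random.default_rng(0)
    ramp = int(0.1 * FS)
    envelope = np.concatenate((np.linspace(0, 1, ramp), np.linspace(1, 0, ramp)))
    silence = np.zeros(int(0.2 * FS))
    bursts = [pink_noise(len(envelope), rng) * envelope for _ in range(n_bursts)]
    return np.concatenate([np.concatenate((b, silence)) for b in bursts])
```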

Procedure

Before each session, the seat’s height was adjusted to align the loudspeakers with the participant’s ear level. Then, participants were instructed to use one controller with their dominant hand to point toward the sound source position. At this point, they were invited to sit and wear the HMD. The interpupillary distance was adjusted, and the real and virtual worlds were aligned in the HMD through a spatial calibration procedure. Calibration was performed by asking each participant to point to specific visual markers occupying a playground until every hit on a marker recorded by the system mismatched the corresponding loudspeaker by less than 1°. The resulting calibration was loaded through the Oculus Guardian system.

During the task, participants were immersed in a VE consisting of a homogeneous landscape free of any absolute azimuth reference (see Fig 1). They listened to sounds coming from the loudspeakers and pointed to a guessed sound source position with the controller in their hand. A beam was displayed to give participants visual feedback about the pointing direction. Participants were neither instructed to respond as soon as they heard a sound nor informed that the stimuli came only from the frontal hemifield.

Fig 1. The 3D scene with the beam pointing to a guessed sound source position.

https://doi.org/10.1371/journal.pone.0312073.g001

Before a test session started, participants completed a brief training session of five trials. They were invited to take the time they needed to produce a response. Each trial began when one loudspeaker started to reproduce a stimulus; concurrently, the system started to track the participant’s head movement. It finished when a participant pulled the Oculus Touch trigger. At this moment, the acoustic stimulus stopped. The system recorded the guessed angular position and the trajectory of the participant’s head during the trial. After pausing for one second in silence, a new trial began.


Conditions

Each session included multiple test conditions: up to four for the HA listeners and three for the CI listeners. In each condition, the stimulus was presented 5 times from each loudspeaker position across a sequence of 13 × 5 = 65 trials. The sequence was constrained to equate the number of target position shifts alternating leftward and rightward. Every participant received a randomly rotated version of this constrained sequence; to preserve the constraint at the junction, the angular position forming the sequence tail was pasted to the sequence head, and the first trial was later removed from the analysis. We deliberately chose not to reset the participant’s head to a starting position after each trial, since the literature reports that interruptions decrease performance in individuals who are cognitively engaged in a demanding task [33]. Considering the age of our population, we favored engagement by letting participants keep their focus on the pointing task.
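As an illustration only, the rotation and the shift-balance check described above might be sketched as follows. This is our interpretation of the procedure, not the authors' code; `base` stands for a hypothetical shift-balanced trial sequence of target angles.

```python
import random


def shift_balance(seq):
    """Count leftward vs. rightward target-position shifts in a trial sequence."""
    left = sum(1 for a, b in zip(seq, seq[1:]) if b < a)
    right = sum(1 for a, b in zip(seq, seq[1:]) if b > a)
    return left, right


def rotated_sequence(base, rng=None):
    """Circularly rotate a trial sequence by a random offset and paste the tail
    position to the head; the duplicated first trial is dropped from analysis."""
    rng = rng if rng is not None else random.Random(0)
    k = rng.randrange(len(base))
    rotated = base[k:] + base[:k]
    return [rotated[-1]] + rotated
```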

Each participant performed the task first with both devices turned on (“ON”), then with one device turned on and one off (“L” for the left device turned on, or “R” for the right device turned on, respectively) by randomly starting with either the left or right ear, and finally with both devices turned off (“NO”). The ON condition was presented first during each test session because it provided an everyday listening context to which participants were accustomed and confident. The NO condition was omitted if a patient’s pure tone average threshold in the frequency range [0.5-4] kHz was above the stimulus level used for the test. For this reason, only nine HA listeners completed the NO condition. Five HA listeners could not complete the L and R conditions either since their session had to be stopped immediately as they reported annoyance or fatigue to the experimenter. In the end, four HA listeners completed the whole session in all four conditions; four completed only the ON, L, and R conditions, and five completed only the ON and NO conditions. Conversely, CI listeners completed the ON, L, and R conditions. We analyzed only sessions including the complete set of 65 trials, except seven sessions that were completed by CI listeners (two in both the R and L conditions, one in the R condition, one in the L condition, and one in the ON condition), each missing one trial that was not recorded due to a technical problem.

Data analysis

From the 3D array of head positions and the 4D orientation quaternion, we computed the difference between the target angle and the head orientation angle when a response was produced (head rotation) and the total distance covered by the head during each trial (head distance). The latter has already been used to measure head dynamism in studies examining spontaneous actions during music listening [34]. The signed error was computed as the difference between the target and the pointed angle [35]. From it, we computed the unsigned error as the absolute value of the signed error. While the signed error indicated angular bias across repeated trials (e.g., the tendency to shift leftward or rightward), the unsigned error quantified overall accuracy. In the following, we refer to the signed error as bias and to the unsigned error as accuracy. Although the unsigned error is continuous by nature, it was measured in whole degrees and therefore analyzed in bins 1° apart, owing to the precision of the Oculus.
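The per-trial metrics defined above reduce to simple arithmetic; a minimal sketch (function and argument names are ours):

```python
import numpy as np


def localization_metrics(target_deg, pointed_deg, head_yaw_deg, head_positions):
    """Per-trial metrics following the definitions in the text.

    target_deg     : target angle of the active loudspeaker
    pointed_deg    : angle pointed at by the participant at response time
    head_yaw_deg   : head orientation angle at response time
    head_positions : (n, 3) array of tracked head positions during the trial
    """
    signed_error = target_deg - pointed_deg   # bias
    unsigned_error = abs(signed_error)        # accuracy
    head_rotation = target_deg - head_yaw_deg
    # Head distance: total path length covered by the head during the trial.
    steps = np.diff(np.asarray(head_positions, dtype=float), axis=0)
    head_distance = float(np.linalg.norm(steps, axis=1).sum())
    return signed_error, unsigned_error, head_rotation, head_distance
```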

The experiment had a mixed design, with two factors: population (HA or CI listeners) and condition (four for the former population and three for the latter). No participant with CIs had a hearing threshold sufficient for attending the NO condition; hence, seven subsets were analyzed. As mentioned before, not all participants performed the test in every condition. The mean μ and standard deviation σ of the unsigned error were computed for each subset. Trials resulting in unsigned errors larger than three standard deviations above the mean per target angle were considered outliers and removed from the respective subset [36]. Exclusion of outliers is a common procedure in sound localization studies [23, 35]. In our case, the outliers were 79 out of 4218 trials, i.e., 1.87%. A wide variance in the data was noted during the training sessions, especially in the localization accuracy of the CI listeners. As in other studies [16], this variance was mapped on a logarithmic scale so as to de-emphasize larger variations. The median of each participant’s five repeated measurements was computed for each loudspeaker position. It is reasonable to assume that NH adult listeners’ responses are normally distributed because they are likely stable and consistent across positions [37]. The localization of a specific loudspeaker position by children who experienced auditory deprivation before being bilaterally implanted is less likely to follow a normal distribution [38]. Since our subsets often violated the assumption of normally distributed residuals, linear mixed-effects models were not chosen for the analysis.
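The per-target 3σ outlier rule can be sketched as follows (an illustrative reimplementation, not the study's code):

```python
import numpy as np


def remove_outliers(errors, targets):
    """Keep trials whose unsigned error is at most mean + 3*std for their
    target angle; return a boolean mask of retained trials."""
    errors = np.asarray(errors, dtype=float)
    targets = np.asarray(targets)
    keep = np.ones(len(errors), dtype=bool)
    for t in np.unique(targets):
        m = targets == t
        threshold = errors[m].mean() + 3 * errors[m].std()
        keep[m] = errors[m] <= threshold
    return keep
```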

The analysis first compared each experimental variable of the two populations in the ON condition. Then, the other conditions were analyzed separately for each population. The data were hierarchical: they were aggregated via the median of all positions per participant and condition; moreover, they were divided by position and compared by condition. A Shapiro-Wilk test was performed to check whether data residuals followed a normal distribution [39]. Given the limited sample size in some cases, the normality of the residuals was also tested with the Anderson-Darling test [40]; we did not report the results of the latter test because they rarely disagreed with the Shapiro-Wilk test results (14 cases out of 504); when they did, we did not find statistical significance with any test of the subsequent analysis. If the samples came from the same population and the normality assumption was violated, a Friedman test [41] was performed, followed by a post-hoc Nemenyi test. The latter employs the critical difference (CD) statistic and, according to Nemenyi [42], was developed to account for family-wise error, hence being already a conservative test. For this reason, we did not apply p-value adjustments. If the residuals from the same population followed a normal distribution, sphericity was checked with a Mauchly test [43]. If the sphericity hypothesis was met, we checked the equality of the means through an RM-ANOVA and performed a post-hoc analysis with a t-test; if sphericity was violated, the same tests were performed with a Greenhouse-Geisser correction of the p-value. If the samples came from two populations in the ON condition, we checked normality with a Shapiro-Wilk test and homoscedasticity with a Levene test [44]. If the normality and homoscedasticity assumptions were met, we compared the samples with a t-test; if homoscedasticity was violated, a Welch t-test was employed. If evidence of a violation of normality was found instead, a Mann-Whitney U test was used to compare distributions with similar variances; otherwise, a Yuen test [45] was used. The effect size was reported for every test: partial η2 was computed for the repeated measures ANOVA and Hedges’ g for the pairwise tests. An effect size (named W) was estimated as in Tomczak [46] for the non-parametric Friedman test. Hedges’ g was computed from Cohen’s d using an average variance if the samples came from the same population. A correlation analysis of each condition’s data was performed using Spearman’s ρ correlation coefficient, since we did not assume a normal distribution of the variables. All tests were two-tailed. A Bonferroni correction was applied for every multiple comparison, except for the Nemenyi test. Statistical significance was set at α = 0.05. The analysis was made using the Python packages Pingouin [47] and Scipy [48] and the data visualization libraries Seaborn [49] and Matplotlib [50].
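The within-population branch of this decision rule can be sketched with SciPy alone. This is a simplified illustration of the logic, not the authors' pipeline: normality is checked here on the raw samples rather than on model residuals, and `f_oneway` merely stands in for the RM-ANOVA with Mauchly sphericity check (available, e.g., in Pingouin).

```python
import numpy as np
from scipy import stats

ALPHA = 0.05


def compare_repeated(*samples):
    """Choose a test for repeated measures taken from the same participants.

    Sketch only: if any sample departs from normality, fall back to the
    non-parametric Friedman test; otherwise run a parametric omnibus test
    (f_oneway here; the paper uses RM-ANOVA after a sphericity check)."""
    normal = all(stats.shapiro(np.asarray(s))[1] > ALPHA for s in samples)
    if not normal:
        stat, p = stats.friedmanchisquare(*samples)
        return "Friedman", stat, p
    stat, p = stats.f_oneway(*samples)
    return "parametric", stat, p
```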

Results

Medians of signed error, unsigned error, head rotation, and distance are graphically summarized in Figs 2 and 3 for each test condition, aggregated by target and participant. In the ON condition, the CI listeners’ median of the unsigned error was significantly worse than that of the HA listeners. The CI listeners’ medians of the signed error were the farthest from zero in the respective conditions. The signed error in the asymmetrical hearing conditions exhibited the largest variance. The second largest variance was found in the HA R condition, where the median was the third farthest from zero. The medians of the signed error in listeners with both HAs turned on and off were comparable to the mean performance of young NH listeners in the same mixed environment [24] (μ = 0.60°, σ = 5.26°). The median of the unsigned error in HA listeners with both devices turned on was similar to the average performance of NH listeners [24] (μ = 4.15°, σ = 3.28°); CI listeners did not fall within this range even in the CI ON condition. The asymmetric hearing condition increased the unsigned error of the former population more than fourfold, and that of the latter threefold. The accuracy in the HA NO condition was the second-best.

Fig 2. Medians across conditions of signed and unsigned errors.

https://doi.org/10.1371/journal.pone.0312073.g002

Fig 3. Medians across conditions of head rotation and head distance.

https://doi.org/10.1371/journal.pone.0312073.g003

Fig 3 shows that asymmetrical hearing leads to the highest head rotation variances. HA listeners had the head rotation median closest to zero when both devices were turned on. Compared to HA listeners, the head distance covered by CI listeners had larger variances and medians in every condition. Moreover, even though the CI ON condition had a smaller median, the CI listeners’ head distance variances were similar across conditions. The HA ON condition exhibited the lowest head dynamism. Compared to NH listeners [24] (μ = 0.13 m), both HA listeners (μ = 0.22 m) and CI listeners (μ = 0.28 m) increased head movement.

The statistical analysis first considered the differences between the CI and HA populations in the bilateral listening condition and then the differences between conditions for each population separately. The results with statistical significance are presented in Table 2. The statistical analysis supported these observations: the accuracies of the two populations in the ON condition were different, as were the head distances. Within each population, the ON condition led to significantly better accuracy than the asymmetric conditions, except for the HA R condition, which was not found to be statistically different.

Table 2. Comparison among medians of the variables: between populations in the ON condition, and across conditions within the CI and HA listener groups.

Only statistically significant results are illustrated.

https://doi.org/10.1371/journal.pone.0312073.t002

The data of the two populations across loudspeaker positions in the ON condition are displayed in Fig 4, and the statistically significant results are presented in Table 3. For eight targets out of thirteen, the HA listeners’ signed error variances were smaller than the CI listeners’, but only at −60° were the medians significantly different. The HA listeners’ unsigned error minima, medians, and maxima were always smaller than the corresponding CI listeners’ ones, except for the 30° and 90° maxima. The difference between the unsigned error medians of the two populations was always statistically significant, except at ±90°. The peak of the unsigned error of the CI listeners occurred at 15°, while that of the HA listeners was located at 90°.

Fig 4. Signed errors, unsigned errors, head rotations, and distances of CI and HA listeners in the ON condition across positions.

The data are normalized. Filled circles represent the HA ON medians, empty circles represent the HA ON minima and maxima. Squares represent the CI ON medians, and diamonds represent the CI ON minima and maxima.

https://doi.org/10.1371/journal.pone.0312073.g004

Table 3. Statistical results of the variables of CI and HA listeners in the ON condition across positions.

Only statistically significant results are illustrated.

https://doi.org/10.1371/journal.pone.0312073.t003

The HA listeners’ head distance medians were always smaller than the CI listeners’ ones, but only for three target positions were they significantly different.

The data divided by population in each condition are illustrated in Figs 5 and 6. The statistically significant results are shown in Tables 4 and 5. The asymmetrical hearing conditions worsened CI listeners’ accuracy at every target position, except for ±15°. CI R’s accuracy was significantly worse than CI ON’s everywhere, except for ±15°, 45°, 75°, and 90°. CI L’s accuracy was significantly worse than CI ON’s at −90°, 0°, and every target position in the right hemifield, except for 15°.

Fig 5. Signed errors, unsigned errors, head rotations, and distances of CI listeners in every condition across positions.

The data are normalized. Circles represent the medians, triangles pointing down represent maxima, and triangles pointing up represent minima. Squares represent the CI ON medians, and diamonds represent the CI ON minima and maxima. Pluses represent the CI L medians, and crosses represent the CI L minima and maxima. Hexagons represent the CI R medians; rotated hexagons represent the CI R minima and maxima.

https://doi.org/10.1371/journal.pone.0312073.g005

Fig 6. Signed errors, unsigned errors, head rotations, and distances of HA listeners in every condition across positions.

The data are normalized. Filled circles represent the HA ON medians, and empty circles represent the HA ON minima and maxima. Pluses represent the HA L medians, and crosses represent the HA L minima and maxima. Hexagons represent the HA R medians; rotated hexagons represent the HA R minima and maxima. Triangles pointing up represent the HA NO medians, and triangles pointing down represent the HA NO minima and maxima.

https://doi.org/10.1371/journal.pone.0312073.g006

Table 4. Statistical results of the variables of CI listeners in every condition across positions.

Only statistically significant results are illustrated.

https://doi.org/10.1371/journal.pone.0312073.t004

Table 5. Statistical results of the variables of HA listeners in every condition across positions.

Only statistically significant results are illustrated.

https://doi.org/10.1371/journal.pone.0312073.t005

Asymmetric hearing led to larger unsigned error medians and maxima for the HA listeners; a significant difference was found only at −15°, between the HA R and HA ON conditions. The HA listeners’ head distance maxima in asymmetrical hearing conditions were not always the largest, and the medians were never significantly different.

The correlations between the experimental variables and individual age in every condition can be inspected in the heat maps in Figs 7 and 8.
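The heat maps summarize pairwise rank correlations among the experimental variables and age. A minimal sketch of how such a matrix can be built with SciPy’s Spearman correlation (the study’s analysis stack included SciPy [48] and Pingouin [47]); the variable names follow the figure abbreviations, but the values below are synthetic:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
age = rng.uniform(6, 17, 20)                      # synthetic ages (years)
data = {
    "UE": 40 - 1.5 * age + rng.normal(0, 2, 20),  # unsigned error, shrinking with age
    "HD": rng.normal(0.3, 0.1, 20),               # head distance
    "SE": rng.normal(0, 8, 20),                   # signed error
    "HR": rng.normal(0, 15, 20),                  # head rotation
    "age": age,
}
names = list(data)
n = len(names)
rho = np.zeros((n, n))
pval = np.zeros((n, n))
for i, a in enumerate(names):
    for j, b in enumerate(names):
        rho[i, j], pval[i, j] = spearmanr(data[a], data[b])
# Cells with pval < 0.05 would receive an asterisk in the heat map
```

With these synthetic data, the UE–age cell comes out negative, mirroring the accuracy-improves-with-age pattern discussed below.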

Fig 7. Heat maps reporting the correlations occurring in each condition of the CI listeners.

Each asterisk indicates the statistical significance of the correlation. UE stands for unsigned error, HD for head distance, SE for signed error, and HR for head rotation.

https://doi.org/10.1371/journal.pone.0312073.g007

Fig 8. Heat maps reporting the correlations occurring in each condition of the HA listeners.

Each asterisk indicates the statistical significance of the correlation. UE stands for unsigned error, HD for head distance, SE for signed error, and HR for head rotation.

https://doi.org/10.1371/journal.pone.0312073.g008

When both CIs were turned on, the head distance correlated positively with the unsigned error. When only one CI was turned on, the head rotation and the signed error correlated with the unsigned error, positively when the left CI was turned off, and negatively when the right CI was turned off. In the asymmetrical conditions, the head distance correlated with the signed error positively when the right CI was turned off, and negatively when the left CI was turned off.

Considering the correlations in HA listeners, age correlated positively with accuracy in every condition, except when the left HA was turned off. Also for the HA listeners, head distance correlated positively with the unsigned error when both devices were turned on. Finally, age correlated negatively with the head distance when both devices were turned on, and positively in both the asymmetrical conditions.

Discussion

Here, we summarize the main findings that emerged from the results:

  • CI listeners did not process available localization cues as effectively as HA listeners did. With bilateral devices on, CI children performed worse than HA children, both overall and across positions, except at the most eccentric target locations.
  • CI children were more dynamic than HA children with bilateral devices on. CI children’s head movements presented a large variance in every condition, indicating uncertainty and difficulty in localization. The correlation analysis suggested active listening did not improve CI children’s localization.
  • Children with HAs exploited their residual and previous binaural ability in each hearing condition. Asymmetric hearing conditions impaired the performance of children with CIs much more than that of children with HAs.
  • CI children’s localization strategy was based on intensity cues. In asymmetric hearing conditions, CI listeners aimed to maximize the intensity at the aided ear; head rotation correlated with the accuracy, showing a clear tendency to turn that ear towards the sound source. Conversely, when the target was in the hemifield of the assisted ear, symmetric and monolateral conditions did not result in significantly different accuracy.
  • Age correlated positively with accuracy in children with HAs in three hearing conditions. This correlation suggested that HA children acquired spatial hearing abilities through binaural experiences. Asymmetric hearing conditions affected head dynamism in older HA individuals; this sub-group, hence, likely made proficient use of active listening.

Bilateral hearing: CI head movements and HA binaural skills

The bias and accuracy medians represented by the signed and unsigned error boxplots in Fig 2 ranked as expected among populations [51]. CI listeners’ accuracy with both CIs turned on (median: 18.4°) was consistent with previous research, where absolute azimuth errors in the frontal space did not exceed 39.4° [38, 52]. Nevertheless, our tests uncovered higher variances and numbers of outliers, probably because no visual cues were available to support localization, unlike previous reports in which a touch screen was used for response validation and the loudspeakers were visible [38, 52]. CI listeners’ accuracy was significantly worse than HA listeners’, both in the medians and across positions; the difference was not statistically significant at the most eccentric positions, where binaural ability was less necessary. Moreover, the CI listeners’ unsigned error peaked at a near-central target position, 15°, where the effect size of the difference from the HA listeners’ unsigned error was the highest (g = 1.78). These results confirmed the poor ITD sensitivity of CI listeners [53, 54], probably due to two main reasons: the absence of auditory experience during an early critical period [55], and the pulse rate at which CIs operate [56]. CI processors generally run at fixed rates between 900 and 3700 pulses per second (pps) [57]; faster temporal sampling of speech envelopes might improve speech recognition in CI users. Nevertheless, even 600 pps could be “too fast for ITD,” given the poor performance of CI listeners at rates of 300 pps or above [56].
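The effect size quoted above (g = 1.78) is Hedges’ g, the bias-corrected standardized mean difference. A minimal sketch of its computation; the two samples below are hypothetical unsigned errors, not the study’s data:

```python
import numpy as np

def hedges_g(x, y):
    """Hedges' g: Cohen's d with the usual small-sample bias correction."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                        / (nx + ny - 2))
    d = (x.mean() - y.mean()) / pooled_sd
    return d * (1 - 3 / (4 * (nx + ny) - 9))  # Hedges' correction factor

# Hypothetical unsigned errors (deg) at a frontal target, one value per child
ci_errors = [35, 42, 28, 50, 38]
ha_errors = [10, 14, 8, 12, 16]
print(round(hedges_g(ci_errors, ha_errors), 2))  # 3.88
```

A positive g here means larger errors in the first sample; values above 0.8 are conventionally considered large.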

An inspection of bias and accuracy revealed that the signed and unsigned errors made by HA listeners with both devices turned off were close to those made with the devices turned on: the accuracy medians were 11.4° and 6.0°, respectively. The benefit of HAs to spatial hearing was found to be significant in older adults, even if poorer than predicted [15]; a comparable benefit appeared in our children, although, not being statistically significant, it cannot be generalized. The median of the angular bias improved when both devices were turned off (−1.0°) rather than on (2.0°), even if not significantly. This result has already been reported in some studies [58]. It is interpretable as a residual binaural sensitivity of HA listeners, who can achieve good localization performance without device support. The importance of a prelingual binaural experience is supported by the strong correlation between age and accuracy in HA listeners when both their devices were turned on or both were turned off. Older listeners had consolidated a longer experience and relied on it for spatial hearing; in the HA ON condition, these individuals did not rely on dynamic cues produced by head movements, as the negative correlation between head distance and age suggested.

Concerning motor activity during the task in the ON condition, the head distance results in Table 2 show that CI listeners were especially active; the head distance covered by HA listeners was significantly smaller. The positive correlation between head distance and unsigned error in the CI ON, HA ON, and HA NO conditions can be interpreted as a sign of uncertainty in localization [59]. Mueller et al. [60] found that head movements disturbed localization, but they instructed their participants either to move or to keep their heads still before attending to specific listening conditions. In our task, head movements were spontaneous and may have been amplified when a target proved difficult to localize.

Asymmetrical hearing: CI intensity maximization and HA active search compensatory strategies

The accuracy and angular bias of CI listeners were significantly impacted by asymmetric hearing: the former deteriorated significantly, while the variance of the latter soared when one CI was turned off. Both were indicators of uncertainty and difficulty in localization. Unsigned errors increased at every target location except ±15°. Accuracy did not significantly deteriorate for targets in the aided ear’s hemifield when the CI on the other side was turned off. These findings suggested that CI children’s localization was mostly based on intensity [16]; their strategies aimed at maximizing intensity at the aided ear. This hypothesis was supported by the significant correlation between head rotation and accuracy: when the assisted ear pointed towards the source, the intensity was at its maximum, and the best performances were obtained. As shown in Fig 2, the angular bias leaned toward the aided ear in the CI asymmetrical conditions. CI children’s head movements seemed to mitigate the intensity asymmetry; moreover, head distance correlated positively with head rotation when only the left CI was turned on, indicating a search for the intensity peak at the left ear.

Even if the signed and unsigned error variances increased in both asymmetrical conditions, the effect of asymmetrical hearing on the accuracy of HA listeners was far more limited. The post-hoc analysis found significant differences only at 15°, between the bilateral hearing condition and the condition with the left HA turned off, and between the medians of the former condition and the condition with the right HA turned off. Again, these two findings suggested that having just one device turned on created confusion and increased the difficulty of localization. HA listeners compensated for the adverse condition more efficiently; the difference in accuracy emerged at an almost frontal target position, where binaural differences are more subtle and require greater sensitivity. Unlike CI listeners, HA children’s data in asymmetrical conditions did not present a correlation between head rotation and unsigned error, indicating that they probably used residual binaurality for spatial hearing.

Fig 3 shows a general tendency of HA children to orient the right ear towards the source in every condition. An advantage of the right ear in auditory processing has been firmly established by decades of behavioral, electrophysiological, and neuroimaging research [61]. Even the correlation between head rotation and unsigned error in the condition with both HAs turned on can be read as a tendency to favor right-ear intensity maximization.

Bias and head rotation were positively correlated when only one device, CI or HA, was turned on, indicating a tendency to couple the pointing gesture with head rotation under asymmetric listening conditions. Our result extends to the HI population the training-induced behavior previously observed in NH listeners [62].

Previous research found no relationship between head dynamism and age in children with CIs [5], and our results support this. Children with HAs behaved differently: head dynamism correlated negatively with age in the ON condition. HA listeners who had developed binaural sensitivity behaved like the NH population, as they did not need to rely on dynamic cues elicited by head movement. In asymmetric hearing conditions, however, they did rely on such cues: older HA listeners produced more pronounced head dynamism, as the positive correlation between head distance and age showed. They faced disruptions of the localization cues by generating dynamic changes in the binaural cues, enabling a more reliable response in difficult hearing conditions [63]. The negative correlation between unsigned error and age confirmed that they were also the most successful in localization, at least when the right HA was turned off. Ultimately, the correlation with age suggests that active listening is refined throughout life [5].

These findings could be exploited when planning specific interventions for diverse HI pediatric populations. Children with HAs should be assisted in acquiring binaural skills, without necessarily insisting on head movement or unilateral maximization of intensity unless they are in adverse hearing situations such as speech-in-noise. Conversely, the bilateral perception of children with CIs should be trained by insisting on active-search motor behavior, including unilateral training of even the weaker hemifield.

Spatial hearing investigations call on clinical research to take everyday listening into deeper consideration. The current study highlighted how crucial it is to support more ecological scenarios, so that active listening can be taken into account when assessing hearing ability. An open research question is whether our results would be confirmed if the noise bursts were replaced with speech, the most informative sound messages we are exposed to during the day. Although pink noise is a standard stimulus in the literature [24], Neuman et al. [51] found no difference in localization accuracy between speech and pink noise sources, suggesting that the salient ILD cues are preserved by vocal messages. However, the processing that an auditory device applies to pink noise, in terms of signal compression, noise reduction, and directional sensitivity, remains generally unknown outside the manufacturing company. At any rate, we remain non-committal about the effectiveness of an accurate rendering of ITDs with CIs, because individuals who have not accumulated enough experience with these cues may not respond as expected.

Conclusion

This study examined how children rehabilitated with bilateral hearing aids or bilateral cochlear implants localized sounds in the anterior horizontal field with visual cues concealed. Head movement and orientation were instrumental for spatial hearing, with different roles in the two populations. Asymmetrical hearing caused the largest errors, particularly for cochlear implant users. Children with bilateral cochlear implants showed more active listening than children with bilateral hearing aids; nevertheless, their activity revealed uncertainty rather than serving as an additional resource. In the latter population, significant correlations were found between age and localization accuracy and between age and head movement. Listeners with hearing aids and a longer binaural hearing experience actively searched for the sound source to cope with the binaural cues disrupted by asymmetric hearing conditions. The dominance of intensity cues was confirmed for the population with cochlear implants.

Hints of strategies based on level maximization at the better hearing ear were found in asymmetric listening conditions. The different reactions of the two populations to the adverse conditions introduced by asymmetric hearing were analyzed: they revealed that children with hearing aids can rely on richer binaural cues, localize through dynamic information, and sharpen this ability over time. A quantitative analysis of active listening may pave the way for new methodologies in auditory localization studies, which could objectively characterize listeners’ spatial listening strategies based on their motor behavior in an ecological acoustic environment. Furthermore, based on the position of the head-mounted devices and their orientation angle when the target was hit, the data acquired during a test session might be used to train dynamic and adaptive algorithms that enhance the directionality of cochlear implants and hearing aids.

Acknowledgments

We thank all the participants who volunteered in this study. We sincerely thank Laser Industries SRL (Treviso, Italy) for supporting the early setup of “La stanza di Matilde” and Niccolò Granieri for helping gather the participants’ data. The authors have no conflicts of interest to declare.

References

  1. 1. Kidd G. Jr, Arbogast T. L., Mason C. R., & Gallun F. J. (2005). The advantage of knowing where to listen. The Journal of the Acoustical Society of America, 118(6), 3804–3815. http://dx.doi.org/10.1121/1.2109187. pmid:16419825
  2. 2. Rayleigh L. (1907). XII. On our perception of sound direction. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 13(74), 214–232. http://dx.doi.org/10.1080/14786440709463595.
  3. 3. Blauert J. (1996). Spatial hearing: The psychophysics of human sound localization. The MIT Press. http://dx.doi.org/10.7551/mitpress/6391.001.0001.
  4. 4. Pastore M. T., Natale S. J., Yost W. A., & Dorman M. F. (2018). Head movements allow listeners bilaterally implanted with cochlear implants to resolve front-back confusions. Ear and Hearing, 39(6), 1224–1231. http://dx.doi.org/10.1097/AUD.0000000000000581. pmid:29664750
  5. 5. Coudert A., Gaveau V., Gatel J., Verdelet G., Salemme R., Farnè A., et al. (2021). Spatial hearing difficulties in reaching space in bilateral cochlear implant children improve with head movements. Ear and Hearing, 43(1), 192–205. http://dx.doi.org/10.1097/AUD.0000000000001090.
  6. 6. Paredes-Gallardo A., Innes-Brown H., Madsen S. M. K., Dau T., & Marozeau J. (2018). Auditory stream segregation and selective attention for cochlear implant listeners: Evidence from behavioral measures and event-related potentials. Frontiers in Neuroscience, 12. http://dx.doi.org/10.3389/fnins.2018.00581. pmid:30186105
  7. 7. Suzuki, Y. (2010). Auditory displays and microphone arrays for active listening. In Keynote lecture, 40th International Audio Engineering Society Conference.
  8. 8. Nojima R., Morimoto M., Sato H., & Sato H. (2013). Do spontaneous head movements occur during sound localization? Acoustical Science and Technology, 34(4), 292–295. http://dx.doi.org/10.1250/ast.34.292.
  9. 9. Morikawa D., Toyoda Y., & Hirahara T. (2013). Head movement during horizontal and median sound localization experiments in which head-rotation is allowed. The Journal of the Acoustical Society of America, 133(5_Supplement), 3510–3510. http://dx.doi.org/10.1121/1.4806262.
  10. 10. Anderson S. R., Burg E., Suveg L., & Litovsky R. Y. (2024). Review of binaural processing with asymmetrical hearing outcomes in patients with bilateral cochlear implants. Trends in Hearing, 28. http://dx.doi.org/10.1177/23312165241229880. pmid:38545645
  11. 11. Ehlers E., Goupell M. J., Zheng Y., Godar S. P., & Litovsky R. Y. (2017). Binaural sensitivity in children who use bilateral cochlear implants. The Journal of the Acoustical Society of America, 141(6), 4264–4277. http://dx.doi.org/10.1121/1.4983824. pmid:28618809
  12. 12. Noble W., Byrne D., & Lepage B. (1994). Effects on sound localization of configuration and type of hearing impairment. The Journal of the Acoustical Society of America, 95(2), 992–1005. http://dx.doi.org/10.1121/1.408404. pmid:8132913
  13. 13. van Hoesel R. J. M. (2004). Exploring the benefits of bilateral cochlear implants. Audiology and Neurotology, 9(4), 234–246. http://dx.doi.org/10.1159/000078393. pmid:15205551
  14. 14. van Hoesel R. J. M., & Tyler R. S. (2003). Speech perception, localization, and lateralization with bilateral cochlear implants. The Journal of the Acoustical Society of America, 113(3), 1617–1630. http://dx.doi.org/10.1121/1.1539520. pmid:12656396
  15. 15. Ahlstrom J. B., Horwitz A. R., & Dubno J. R. (2009). Spatial benefit of bilateral hearing aids. Ear and Hearing, 30(2), 203–218. http://dx.doi.org/10.1097/AUD.0b013e31819769c1. pmid:19194292
  16. 16. Seeber B. U., & Fastl H. (2008). Localization cues with bilateral cochlear implants. The Journal of the Acoustical Society of America, 123(2), 1030–1042. http://dx.doi.org/10.1121/1.2821965. pmid:18247905
  17. 17. Dorman M. F., Loiselle L. H., Cook S. J., Yost W. A., & Gifford R. H. (2016). Sound source localization by normal-hearing listeners, hearing-impaired listeners and cochlear implant listeners. Audiology and Neurotology, 21(3), 127–131. http://dx.doi.org/10.1159/000444740. pmid:27077663
  18. 18. Tomblin J. B., Oleson J. J., Ambrose S. E., Walker E., & Moeller M. P. (2014). The influence of hearing aids on the speech and language development of children with hearing loss. JAMA Otolaryngology–Head & Neck Surgery, 140(5), 403. http://dx.doi.org/10.1001/jamaoto.2014.267. pmid:24700303
  19. 19. Sharma S. D., Cushing S. L., Papsin B. C., & Gordon K. A. (2020). Hearing and Speech Benefits of Cochlear Implantation in Children: A Review of the Literature. International Journal of Pediatric Otorhinolaryngology, 133, 109984. http://dx.doi.org/10.1016/j.ijporl.2020.109984. pmid:32203759
  20. 20. Mehta Z. (2003). Limitations of Pure-Tone Audiometry in the Detection of Nonorganic Hearing Loss: A Case Study. Contemporary Issues in Communication Science and Disorders, 30(Spring), 59–69. http://dx.doi.org/10.1044/cicsd_30_S_59.
  21. 21. Steel M. M., Papsin B. C., & Gordon K. A. (2015). Binaural Fusion and Listening Effort in Children Who Use Bilateral Cochlear Implants: A Psychoacoustic and Pupillometric Study. PLOS ONE, 10(2), e0117611. http://dx.doi.org/10.1371/journal.pone.0117611. pmid:25668423
  22. 22. Valzolgher C., Capra S., Sum K., Finos L., Pavani F., & Picinali L. (2024). Spatial Hearing Training in Virtual Reality with Simulated Asymmetric Hearing Loss. Scientific Reports, 14(1). http://dx.doi.org/10.1038/s41598-024-51892-0. pmid:38291126
  23. 23. Ahrens A., Lund K. D., Marschall M., & Dau T. (2019). Sound Source Localization with Varying Amount of Visual Information in Virtual Reality. PLOS ONE, 14(3), e0214603. http://dx.doi.org/10.1371/journal.pone.0214603. pmid:30925174
  24. 24. Gulli A., Fontana F., Orzan E., Aruffo A., & Muzzi E. (2022). Spontaneous Head Movements Support Accurate Horizontal Auditory Localization in a Virtual Visual Environment. PLOS ONE, 17(12), e0278705. http://dx.doi.org/10.1371/journal.pone.0278705. pmid:36473012
  25. 25. Gulli, A., Fontana, F., Orzan, E., Aruffo, A., & Muzzi, E. (2023). Active sound source localization in bilateral hearing-impaired children. Preprint available at https://www.researchsquare.com/article/rs-3032496/v1.
  26. 26. Valzolgher C., Capra S., Gessa E., Rosi T., Giovanelli E., & Pavani F. (2024). Sound Localization in Noisy Contexts: Performance, Metacognitive Evaluations and Head Movements. Cognitive Research: Principles and Implications, 9(1). http://dx.doi.org/10.1186/s41235-023-00530-w. pmid:38191869
  27. 27. Coudert A., Verdelet G., Reilly K. T., Truy E., & Gaveau V. (2022). Intensive Training of Spatial Hearing Promotes Auditory Abilities of Bilateral Cochlear Implant Adults: A Pilot Study. Ear and Hearing, 44(1), 61–76. http://dx.doi.org/10.1097/AUD.0000000000001256. pmid:35943235
  28. 28. Holzwarth, V., Gisler, J., Hirt, C., & Kunz, A. (2021). Comparing the Accuracy and Precision of SteamVR Tracking 2.0 and Oculus Quest 2 in a Room Scale Setup. In Proceedings of the 2021 5th International Conference on Virtual and Augmented Reality Simulations (pp. 42–46). Association for Computing Machinery. https://doi.org/10.1145/3463914.3463921.
  29. 29. Carnevale A., Mannocchi I., Sassi M. S. H., Carli M., De Luca G. D., Longo U. G., et al. (2022). Virtual Reality for Shoulder Rehabilitation: Accuracy Evaluation of Oculus Quest 2. Sensors, 22(15), 5511. http://dx.doi.org/10.3390/s22155511. pmid:35898015
  30. 30. Hunkeler, U., Truong, H. L., & Stanford-Clark, A. (2008). MQTT-S—A Publish/Subscribe Protocol for Wireless Sensor Networks. In 2008 3rd International Conference on Communication Systems Software and Middleware and Workshops (COMSWARE’08) (pp. 791-798). IEEE. http://dx.doi.org/10.1109/COMSWA.2008.4554519.
  31. 31. Jost T. A., Nelson B., & Rylander J. (2019). Quantitative Analysis of the Oculus Rift S in Controlled Movement. Disability and Rehabilitation: Assistive Technology, 16(6), 632–636. http://dx.doi.org/10.1080/17483107.2019.1688398. pmid:31726896
  32. 32. Yost W. A., & Zhong X. (2014). Sound Source Localization Identification Accuracy: Bandwidth Dependencies. The Journal of the Acoustical Society of America, 136(5), 2737–2746. http://dx.doi.org/10.1121/1.4898045. pmid:25373973
  33. 33. McFarlane D. C., & Latorella K. A. (2002). The Scope and Importance of Human Interruption in Human-Computer Interaction Design. Human–Computer Interaction, 17(1), 1–61. http://dx.doi.org/10.1207/S15327051HCI1701_1.
  34. 34. Van Dyck E., Moelants D., Demey M., Deweppe A., Coussement P., & Leman M. (2012). The Impact of the Bass Drum on Human Dance Movement. Music Perception, 30(4), 349–359. http://dx.doi.org/10.1525/mp.2013.30.4.349.
  35. 35. Tabry V., Zatorre R. J., & Voss P. (2013). The Influence of Vision on Sound Localization Abilities in Both the Horizontal and Vertical Planes. Frontiers in Psychology, 4. http://dx.doi.org/10.3389/fpsyg.2013.00932. pmid:24376430
  36. 36. Jones P. R. (2019). A Note on Detecting Statistical Outliers in Psychophysical Data. Attention, Perception, & Psychophysics, 81(5), 1189–1196. http://dx.doi.org/10.3758/s13414-019-01726-3. pmid:31089976
  37. 37. Hartmann W. M., Rakerd B., & Gaalaas J. B. (1998). On the Source-Identification Method. The Journal of the Acoustical Society of America, 104(6), 3546–3557. http://dx.doi.org/10.1121/1.423936. pmid:9857513
  38. 38. Zheng Y., Godar S. P., & Litovsky R. Y. (2015). Development of Sound Localization Strategies in Children with Bilateral Cochlear Implants. PLOS ONE, 10(8), e0135790. http://dx.doi.org/10.1371/journal.pone.0135790. pmid:26288142
  39. 39. Gupta A., Mishra P., Pandey C. M., Singh U., Sahu C., & Keshri A. (2019). Descriptive Statistics and Normality Tests for Statistical Data. Annals of Cardiac Anaesthesia, 22(1), 67. http://dx.doi.org/10.4103/aca.ACA_157_18. pmid:30648682
  40. 40. Anderson T. W., & Darling D. A. (1952). Asymptotic Theory of Certain “Goodness of Fit” Criteria Based on Stochastic Processes. The Annals of Mathematical Statistics, 23(2), 193–212. http://dx.doi.org/10.1214/aoms/1177729437.
  41. 41. Friedman M. (1937). The Use of Ranks to Avoid the Assumption of Normality Implicit in the Analysis of Variance. Journal of the American Statistical Association, 32(200), 675–701. http://dx.doi.org/10.1080/01621459.1937.10503522.
  42. 42. Nemenyi, P. B. (1963). Distribution-Free Multiple Comparisons. (Ph.D. Thesis). Princeton University. https://www.proquest.com/docview/302256074.
  43. 43. Mauchly J. W. (1940). Significance Test for Sphericity of a Normal n-Variate Distribution. The Annals of Mathematical Statistics, 11(2), 204–209. http://dx.doi.org/10.1214/aoms/1177731915.
  44. 44. Brown M. B., & Forsythe A. B. (1974). Robust Tests for the Equality of Variances. Journal of the American Statistical Association, 69(346), 364–367. http://dx.doi.org/10.1080/01621459.1974.10482955.
  45. 45. Yuen K. K. (1974). The Two-Sample Trimmed t for Unequal Population Variances. Biometrika, 61(1), 165. http://dx.doi.org/10.2307/2334299.
  46. 46. Tomczak M., & Tomczak E. (2014). The Need to Report Effect Size Estimates Revisited. An Overview of Some Recommended Measures of Effect Size. TRENDS in Sports Sciences, 21(1), 19–25. https://api.semanticscholar.org/CorpusID:73706075.
  47. 47. Vallat R. (2018). Pingouin: Statistics in Python. Journal of Open Source Software, 3(31), 1026. http://dx.doi.org/10.21105/joss.01026.
  48. 48. Virtanen P., Gommers R., Oliphant T.E. et al. (2020). SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods, 17(3), 261–272. http://dx.doi.org/10.1038/s41592-019-0686-2. pmid:32015543
  49. 49. Waskom M. L. (2021). Seaborn: Statistical Data Visualization. Journal of Open Source Software, 6(60), 3021. http://dx.doi.org/10.21105/joss.03021.
  50. 50. Hunter J. D. (2007). Matplotlib: A 2D Graphics Environment. Computing in Science & Engineering, 9(3), 90–95. http://dx.doi.org/10.1109/MCSE.2007.55.
  51. 51. Neuman A. C., Haravon A., Sislian N., Waltzman S. B. (2007). Sound-Direction Identification with Bilateral Cochlear Implants. Ear and Hearing, 28(1), 73–82. http://dx.doi.org/10.1097/01.aud.0000249910.80803.b9. pmid:17204900
  52. 52. Killan C. F., Harman S., Killan E. C. (2018). Changes in Sound-Source Localization for Children with Bilateral Severe to Profound Hearing Loss Following Simultaneous Bilateral Cochlear Implantation. Cochlear Implants International, 19(5), 284–291. http://dx.doi.org/10.1080/14670100.2018.1479147. pmid:29843587
  53. 53. Poon B. B., Eddington D. K., Noel V., Colburn H. S. (2009). Sensitivity to Interaural Time Difference with Bilateral Cochlear Implants: Development Over Time and Effect of Interaural Electrode Spacing. The Journal of the Acoustical Society of America, 126(2), 806–815. http://dx.doi.org/10.1121/1.3158821. pmid:19640045
  54. 54. Laback B., Egger K., Majdak P. (2015). Perception and Coding of Interaural Time Differences with Bilateral Cochlear Implants. Hearing Research, 322, 138–150. http://dx.doi.org/10.1016/j.heares.2014.10.004. pmid:25456088
  55. 55. Kral A. (2013). Auditory Critical Periods: A Review from System’s Perspective. Neuroscience, 247, 117–133. http://dx.doi.org/10.1016/j.neuroscience.2013.05.021. pmid:23707979
  56. 56. Buck A. N., Buchholz S., Schnupp J. W., Rosskothen-Kuhl N. (2023). Interaural Time Difference Sensitivity under Binaural Cochlear Implant Stimulation Persists at High Pulse Rates Up to 900 pps. Scientific Reports, 13(1). http://dx.doi.org/10.1038/s41598-023-30569-0. pmid:36882473
  57. 57. Middlebrooks J. C. (2008). Cochlear-Implant High Pulse Rate and Narrow Electrode Configuration Impair Transmission of Temporal Information to the Auditory Cortex. Journal of Neurophysiology, 100(1), 92–107. http://dx.doi.org/10.1152/jn.01114.2007. pmid:18450583
  58. 58. Van den Bogaert T., Klasen T. J., Moonen M., Van Deun L., Wouters J. (2006). Horizontal Localization with Bilateral Hearing Aids: Without is Better Than With. The Journal of the Acoustical Society of America, 119(1), 515–526. http://dx.doi.org/10.1121/1.2139653. pmid:16454305
  59. 59. Alemu R. Z., Papsin B. C., Harrison R. V., Blakeman A., Gordon K. A. (2024). Head and Eye Movements Reveal Compensatory Strategies for Acute Binaural Deficits During Sound Localization. Trends in Hearing, 28. http://dx.doi.org/10.1177/23312165231217910. pmid:38297817
  60. 60. Mueller M. F., Meisenbacher K., Lai W.-K., Dillier N. (2013). Sound Localization with Bilateral Cochlear Implants in Noise: How Much do Head Movements Contribute to Localization? Cochlear Implants International, 15(1), 36–42. http://dx.doi.org/10.1179/1754762813Y.0000000040. pmid:23684420
  61. 61. Prete G., Marzoli D., Brancucci A., Tommasi L. (2016). Hearing It Right: Evidence of Hemispheric Lateralization in Auditory Imagery. Hearing Research, 332, 80–86. http://dx.doi.org/10.1016/j.heares.2015.12.011. pmid:26706706
  62. 62. Valzolgher C., Todeschini M., Verdelet G., Gatel J., Salemme R., Gaveau V., et al. (2022). Adapting to Altered Auditory Cues: Generalization from Manual Reaching to Head Pointing. PLOS ONE, 17(4), e0263509. http://dx.doi.org/10.1371/journal.pone.0263509. pmid:35421095
  63. 63. McAnally K. I., Martin R. L. (2014). Sound Localization with Head Movement: Implications for 3-D Audio Displays. Frontiers in Neuroscience, 8, 210. http://dx.doi.org/10.3389/fnins.2014.00210. pmid:25161605