Abstract
Technological advances in recent decades have intensified the need for strong language and literacy skills, such that deficits in these skills can significantly reduce occupational opportunities and social richness. Nonetheless, the causes of language and literacy deficits remain scarcely understood, so treatment consists mostly of drill on the very skills affected individuals struggle to perform. The purpose of this study was to test two related hypotheses. Hypothesis 1 was that delays in the development of the central auditory pathways greatly constrain acquisition of language skills dependent upon auditory development; these skills primarily involve those that are late emerging, namely phonological sensitivity. A corollary is that language skills that begin emergence early in life are less affected by delays in auditory development; this largely encompasses lexicosyntactic knowledge. Hypothesis 2 was that some conditions heretofore recognized as impacting language acquisition (poverty and premature birth, for the purpose of this study) take their toll at least in part by constraining the timely development of the central auditory pathways. To test these hypotheses, 104 children (5–6 years old) spanning continua of socioeconomic status and gestational age at birth were tested on (1) three measures of suprathreshold auditory functions associated with development of the central auditory pathways, (2) two measures of lexicosyntactic knowledge, and (3) two measures of phonological sensitivity. Results largely supported both hypotheses: Strong relationships were found between suprathreshold auditory functions and language measures, especially phonological sensitivity, and both socioeconomic status and gestational age appeared to exert their influence on language acquisition completely or partly through an effect on auditory function. 
These results should serve to refocus the search for causes of language and literacy deficits from purely environmental shortcomings to biological determinants, with newly inspired directions for interventional approaches.
Citation: Nittrouer S, Burchardt D, McEnery A, Antonelli J, Kachru R, Roig JC, et al. (2026) How auditory development affects language acquisition: Influences of socioeconomic status and gestational age at birth. PLoS One 21(2): e0341841. https://doi.org/10.1371/journal.pone.0341841
Editor: Claudia Männel, Max-Planck-Institut für Kognitions- und Neurowissenschaften, GERMANY
Received: August 21, 2025; Accepted: January 13, 2026; Published: February 5, 2026
Copyright: © 2026 Nittrouer et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: Data available through the Open Science Framework under the title How auditory development affects language acquisition: Influences of socioeconomic status, prematurity, and otitis media: https://osf.io/a7y6m/overview.
Funding: This work was supported by Grant R01 DC020918 from the National Institutes of Health, National Institute on Deafness and Other Communication Disorders to Susan Nittrouer. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Problem statement
It is widely accepted that language is a uniquely human capacity. Members of other species communicate, but those communications are not as complex as human language, in either acoustic or linguistic structure. In spite of its greater complexity, however, human language necessarily evolved within the constraints of the mammalian auditory system. For individual children, spoken language must develop within the constraints of their own auditory capacities, but these capacities themselves change across childhood, so the constraints they impose on language development are ever changing. Thus, we may predict that if the development of those auditory functions is delayed, language acquisition will be delayed as well. The investigation reported here was undertaken to explore hypotheses arising from that prediction. Specifically, these are: (Hypothesis #1) The timely development of auditory functions more strongly affects later-emerging language phenomena, such as sensitivity to phonological structure, than language phenomena that begin to emerge early, such as lexical and syntactic (i.e., lexicosyntactic) knowledge. Lexical, syntactic, and phonological knowledge were examined to test this hypothesis. (Hypothesis #2) Some conditions known to negatively influence language development actually exert that influence by interfering with the development of auditory functions. Two conditions associated with delayed or disordered language acquisition were invoked in this investigation as a way of exploring this second hypothesis; these were low socioeconomic status and premature birth. Although these conditions are often defined with somewhat arbitrary cut points (e.g., premature birth may be defined as anything less than 36 weeks gestation), we used continuous variables in this work based on the premise that the magnitude of effect should vary in a continuous manner.
Including children along the continua of socioeconomic status and gestational age at birth also served to enhance variability in auditory and language measures, thus contributing to the successful testing of Hypothesis #1.
Auditory development and language acquisition
A newborn infant has auditory capacities that differ in many ways from those of adults: auditory thresholds are higher [1], frequency resolution is poorer [2, 3], and sensitivity to temporal modulation is poorer [4, 5], to name a few. Some of these functions, primarily those that rely on peripheral mechanisms, quickly reach mature status [6]. The development of other auditory capacities, however, extends to puberty; these are functions involving the central auditory pathways. As an example of developmental effects in auditory capacities, Hall and Grose [7] obtained temporal modulation transfer functions for children between the ages of 4 and 10 years, as well as for adults. Transfer functions are computed from thresholds marking how deep the modulation needs to be at various rates of modulation for the listener to detect that modulation. To obtain these thresholds, investigators present stimuli consisting of noise modulated in amplitude across time by sinusoidally shaped envelopes. The depth of valleys in those envelopes is varied adaptively until an estimate is obtained of how deep the modulation needs to be for the listener to detect it. Thresholds are measured at varying rates of modulation from relatively low (e.g., 4 Hz) to quite high (e.g., 512 Hz). Fig 1 from Nittrouer and Lowenstein [8] illustrates mean thresholds for four groups of listeners across five modulation frequencies. These thresholds can be used to compute transfer functions, as shown in Fig 2, which displays temporal modulation transfer functions computed from the mean thresholds of the four groups shown in Fig 1. The height of the function at the intercept (in dB) is considered a metric of the listener’s sensitivity to this modulation; those values are indicated by the symbols at the left of the functions in Fig 2.
The point where the function falls 3 dB below that intercept (in Hz) serves as a metric of the resolution of the listener’s auditory system, meaning how rapid the modulation can be for the listener to still detect it; those values are indicated by the symbols towards the right in Fig 2. Resolution, largely a property of the peripheral auditory system, has consistently been found to reach adult levels early in development, certainly before four years of age [7, 9]. This finding is illustrated in Fig 2 by the fact that the 3-dB down point is similar (in frequency) across groups. Sensitivity to modulation, on the other hand, does not achieve mature status until roughly ten years of age – for children developing in typical fashion. This finding was reported by Hall and Grose, and is illustrated in Fig 2 by the similarity (in dB) of intercepts for the oldest children (8- to 10-year-olds) and the adults, but poorer (less negative) thresholds for 5- to 7-year-old children with no histories of chronic otitis media. Finally, these figures illustrate that 5- to 7-year-olds with histories of chronic otitis media (i.e., six or more episodes before age 3 years, as diagnosed by an otolaryngologist) had poorer sensitivity than age-matched peers without those histories. In the most severe cases, otitis media can temporarily raise auditory thresholds. As seen in Figs 1 and 2, this is one condition that can disrupt the timely development of at least one auditory function, sensitivity to temporal modulation. The current study examined two other conditions that could potentially have the same disruptive effects.
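To make these two metrics concrete, the computation just described can be sketched as follows. This is an illustrative reconstruction, not the analysis code from any of the cited studies, and the example threshold values in the usage note are hypothetical; it assumes thresholds grow less negative (poorer) as modulation rate increases.

```python
import numpy as np

def tmtf_metrics(rates_hz, thresholds_db):
    """Extract the two temporal modulation transfer function metrics described
    in the text: the intercept (sensitivity, in dB) and the 3-dB down point
    (resolution, in Hz). Assumes thresholds_db is ordered by rate and rises
    (grows less negative) with increasing modulation rate."""
    rates = np.asarray(rates_hz, dtype=float)
    thr = np.asarray(thresholds_db, dtype=float)
    intercept = thr[0]                 # height of the low-rate plateau, in dB
    cutoff_level = intercept + 3.0     # 3 dB poorer than the intercept
    # interpolate on a log2-frequency axis to find where the function
    # first rises 3 dB above the intercept
    cutoff_hz = 2.0 ** np.interp(cutoff_level, thr, np.log2(rates))
    return intercept, cutoff_hz
```

For example, hypothetical thresholds of −20, −19, −16, −10, and −4 dB at 4, 8, 16, 64, and 256 Hz would yield an intercept of −20 dB and a 3-dB down point of roughly 12.7 Hz.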
Fig 3 displays a speech waveform and offers insight into what this late development of sensitivity to temporal modulation could mean for language development. As can be seen, the continuous speech signal is amplitude modulated across time, with depth of modulation varying from very deep to relatively shallow. It is reasonable to suggest that the more sensitive a child is to temporal modulation, the better that child will be able to use patterns of amplitude change in the speech signal to recover linguistic structure. The fact that mature sensitivity to temporal modulation is not reached until 9 or 10 years of age, even for children developing on a typical timetable, means that younger children are processing speech signals without access to the same level of detail in these modulation patterns as adults and older children. Release from that constraint is delayed for children with delayed auditory development, such as the 5- to 7-year-olds with histories of otitis media whose data are displayed in Figs 1 and 2.
A similar situation exists for the development of sensitivity to spectral modulation. This modulation refers to the pattern of change in amplitude across the spectrum. As with temporal modulation, it is possible to separately examine resolution, meaning how tightly spaced spectral peaks in that envelope can be and still be distinguishable, and sensitivity, meaning how deep that modulation needs to be to be detected. And as with temporal modulation, resolution is dependent largely on maturation at the level of the cochlea and auditory nerve, a function that reaches maturity very early in life, likely in the first year [3,10]. Sensitivity to spectral modulation, on the other hand, emerges only gradually across childhood [11–14]. To measure this sensitivity, sinusoidally shaped envelopes are imposed on a signal with an otherwise flat spectrum. Unlike stimuli used to assess sensitivity to temporal envelopes, the stimulus remains constant in shape across time. The depth of these envelopes is varied adaptively to obtain thresholds. The rate of modulation can be manipulated experimentally, but low rates of modulation, around 0.5 or 1.0 cycles per octave (cpo), best match the rate of modulation in the speech signal. This fact is illustrated in Fig 4, showing the spectrum for a vowel. In speech signals, this modulation forms the formants, and they occur at a rate of slightly less than one per octave. Again, it is reasonable to suggest that keener sensitivity to this modulation will better support recognition of the speech signal, especially phonemic structure. Sensitivity to spectral modulation, however, reaches maturity even later than sensitivity to temporal modulation [12], leaving young children unable to fully access this modulation in the speech signal.
Beginning units
The initial unit of linguistic organization for the child is different from what the adult may use; for very young children, it is something resembling the whole word [15–19]. When infants enter the world, they have no pre-existing linguistic representations. A network of such representations must be assembled through the child’s interactions with the language in the environment, which is available only in the form of continuous speech. Thus, the very first task facing the child is to develop a strategy for extracting meaningful units from that continuous signal [16,18,20]. To be sure, the infant has several tools available to support this endeavor. First is the fact that the units most meaningful to the child will likely be those that are repeated most often, such as the child’s name or labels of objects relevant to the child’s life. Hearing a consistent acoustic pattern in conjunction with the presence of a specific object will facilitate the extraction of that particular stretch of acoustic signal and its subsequent establishment as a discrete lexical representation: e.g., the acoustic pattern comprising the word doggie consistently occurs when the family dog is present, albeit in different linguistic contexts. Other perceptual tools are available for this initial word learning, as well. For example, infants quickly learn to use language-specific stress patterns to isolate words [21]. Nonetheless, in the early years, children often mis-segment the acoustic signal, such that ‘words’ for an individual child may be idiosyncratic: e.g., gu-daw may become the label for dogs, if the infant consistently hears ‘good dog’ when the family dog is present. Once the infant realizes that objects have labels and actions have names, still other learning devices are recruited. The child will start pointing to or presenting objects to caregivers as a way of requesting the label. 
Critically important is the idea that these early words are holistically represented for the child, in production as well as in perception. Different authors have used varying terms to convey this idea. For example, Vihman and Velleman [22] called them ‘word recipes’ and Menn [23] called them ‘articulatory routines.’ But regardless of what term is used, the general idea is that words lack phonetic, articulatory, and acoustic detail in the child’s early representations. In fact, Charles-Luce and Luce [24] demonstrated that early lexicons are shaped by the child’s goal of keeping words in the lexicon maximally different in acoustic terms. These representations fit within the constraints imposed by auditory functions in these early years because sensitivity to detailed acoustic structure is not required. Over the course of much of childhood, these holistic representations are refined into segmental (i.e., phonological) structure, a process termed lexical restructuring [25–30].
Children are undoubtedly refining their lexical representations through the early years, but towards the end of preschool the process accelerates. Children begin to recognize word-internal structure more clearly and become facile at isolating and manipulating these briefer (phonological) units such as syllables and phonemes [17,25,27,31–34]. This change in sensitivity to linguistic structure coincides with burgeoning perceptual skills, including a shift in attention to the acoustic cues that are most informative regarding phonemic structure in the child’s native language [20,35]. The premise serving as the basis of the analyses reported here was that this period of relatively rapid phonological discovery at the end of the preschool years requires that auditory development is progressing in typical fashion and is sufficient by this age to support the discovery of phonological representations. It is hypothesized that children’s sensitivity to phonological structure during this critically important period will be strongly related to their sensitivity to acoustic structure, especially temporal and spectral modulation. It is further hypothesized that lexical and syntactic knowledge will not be as strongly related to sensitivity to acoustic structure, because words can be and in fact are represented more holistically for young children. Later in childhood, items entering the lexicon, such as those used in school, will come to rely more strongly on refined phonological representations as those items become more complex [36,37]; for young children, however, lexical representations remain underspecified compared to words in adults’ lexicons. Knowledge of syntactic rules – at least those involving word order – can be acquired by learning how to arrange the holistic lexical elements of early childhood. To test these hypotheses, children’s sensitivity to both temporal and spectral modulation was measured, and measures were collected of lexical and syntactic knowledge, as well as of phonological sensitivity.
Preschool critical periods
Many developmental phenomena do not show linear growth. Instead, periods of rapid emergence exist, with relatively stable performance before and after those periods. During these periods of rapid emergence there is typically wide variability among children as some develop the skill sooner within that window than others. If subjects are recruited at ages younger than that period of rapid emergence, little variability will be found across subjects because even those children developing typically will not have started to acquire the skill. And at later ages it might be that all children, even those late in developing the skill under study, will have achieved mastery. Thus, it is essential to select the age range at which maximum variability is expected for the skill being examined. For both the development of sensitivity to temporal and spectral modulation and acquisition of phonological sensitivity, that age is best defined as the end of preschool, as children are entering primary school and beginning to learn to read.
Results of the study cited earlier to describe temporal modulation transfer functions [8] can also be used to illustrate the concept of critical periods and to support the selection of the age range used in this study. Nittrouer and Lowenstein [8] investigated children’s sensitivity to temporal modulation, along with phonological sensitivity; 117 children between the ages of 5 and 10 years participated. The phonological sensitivity task in that experiment was a final consonant choice (FCC) task, in which the child was presented with a word that needed to be repeated. Three words were then presented and the child needed to identify the one that ended in the same sound.
Fig 5 displays the relationship between age and temporal modulation detection thresholds for signals with a 64-Hz modulation rate. Fig 6 displays the relationship between age and scores on the FCC task. Developmental changes are apparent and significant correlations with age are found for both measures: for temporal modulation detection, r(117) = −.449, p < .001, and for FCC, r(117) = .695, p < .001. A critical characteristic seen in these figures, however, is the developmental reduction in variability for both measures. This reduced variability is the characteristic that most strongly supports the selection of the age range of 5–6 years for the current study, because it is the age at which there is most variability among children for these auditory and language phenomena; in essence, it is the sensitive, or critical, period for learning these skills. The central question of this investigation was whether development of language knowledge in this age range, especially phonological sensitivity, is related to development of these auditory functions.
Maximizing variability
In the current study, variability in both language and auditory development was sought through subject selection. Two conditions were targeted that are known to be associated with language development: socioeconomic status and gestational age at birth. For each condition, there was reason to suspect that the development of the central auditory pathways (those above the level of the auditory nerve) is affected, and that was the hypothesis explored in this study. Each condition was treated as continuous in nature, rather than as discrete categories, on the assumption that effects should be continuous in magnitude.
Socioeconomic status is well recognized as a factor associated with language attainment. This construct is typically defined as a composite value of the educational level and occupational prestige of the primary income earner in the home, although maternal education sometimes serves as the independent variable. Children living in abject poverty have reliably been found to perform more poorly on measures of language ability than their middle-class peers [38–43], but evidence shows that socioeconomic status influences child-language outcomes even at the mid- to high-end of the socioeconomic spectrum. Hoff [44] reported poorer language outcomes for children whose parents had no education beyond high school than for children whose parents had graduated from university. And although poverty has been found to negatively impact other functions, such as attention and inhibition, it takes its heaviest toll on language [45]. Traditionally, this delay has been attributed to social determinants, mostly deficiencies in the quantity and quality of language models available in the child’s environment [46–49]. Correlations between amount of language input and child language performance, however, usually fail to explain more than 30 percent of the variance in language abilities (e.g., [49,50]), leaving open the question of what other poverty-related factors explain the poor language outcomes. We suggest that delayed development of the central auditory pathways may explain some of that variance, a suggestion motivated by findings showing that poverty is associated with changes in brain structure, including reduced gray matter in the frontal and temporal cortices, as well as in the hippocampus [51, 52]. Socioeconomic status has also been associated with neural activity at the cortical level, such that higher socioeconomic status promotes stronger activity in language-relevant regions [53]. 
Accordingly, it was reasonable to propose that socioeconomic status might specifically affect the development of the central auditory pathways, and to examine that proposal with behavioral measures of function in those pathways. Of special interest is evidence that poverty specifically disrupts development of specialization of the inferior frontal gyrus (Broca’s area) [54]; this finding provides further support that socioeconomic status may be especially related to development of phonological sensitivity.
Children born prematurely demonstrate poorer language abilities and poorer auditory functioning than children born full term. For example, children born at all levels of prematurity perform more poorly on language measures than their full-term peers [55–57], and these deficits are observed even when subjects born prematurely are matched to full-term subjects on other factors, including maternal education and IQ [58]. In fact, there is evidence that these language difficulties increase as children mature [59]. Where auditory functions are concerned, children born prematurely display “unstable” auditory performance through at least the first year of life, meaning auditory-evoked potentials vary in an unpredictable manner [60]. As late as 8–10 years of age, it has been observed that premature children demonstrate poor temporal ordering and gap-in-noise detection [61]. These auditory deficits are present even when auditory thresholds are within the normal range [62], indicating that they are most likely due to impaired central auditory functions. The altered acoustic environment of the neonatal intensive care unit, compared to the intrauterine acoustic environment, is typically considered the likely cause of the language and auditory deficits exhibited by children born prematurely [63,64]. However, results from animal models of premature birth, where other variables are kept constant, show that the simple act of being removed from the womb early results in neurological deficits [65].
Summary
Language is a uniquely human trait, but it is not always acquired without challenges. Delays or deficits in language and literacy development are the most common problems children face – more common than autism or sensorineural hearing loss [66]. If not effectively treated, language deficits in childhood can become lifelong problems, diminishing almost every aspect of an individual’s life. The investigation described here was undertaken to explore the broad proposal that delays in the development of the central auditory pathways can restrict language development, leading to deficits in language abilities. To explore this proposal, two auditory functions were examined as potential markers of that auditory development: sensitivity to temporal and spectral modulation. The specific proposal offered here is that sensitivity to acoustic structure in the speech signal is more important for the development of phonological representations than for lexicosyntactic abilities. To examine this proposal, measures of lexical, syntactic, and phonological abilities were obtained across subjects. Variability in both language and auditory performance was maximized by recruiting subjects across a set of conditions predicted to impose variability in developmental timing for both kinds of phenomena.
Two specific hypotheses were tested in this work: (#1) The timely development of auditory functions more strongly affects later-emerging sensitivity to phonological structure than earlier-emerging lexicosyntactic abilities; and (#2) Two conditions that have been shown to negatively influence language development actually exert that influence – at least in part – by interfering with the development of auditory functions, namely, socioeconomic status and gestational age at birth.
Methods and materials
All methods used in this study were approved by Institutional Review Board 1 (IRB1) of the University of Florida under protocol IRB202301499 prior to the start of data collection. Approval was obtained on November 6, 2023, and recruitment began on November 7, 2023. Recruitment is ongoing.
Participants
Subjects were recruited for this study through several methods. Both paper and digital flyers were distributed to schools, community centers, libraries, and medical clinics. Distribution efforts were strongly focused in low-income neighborhoods. Additionally, electronic records of the university medical system were searched for diagnostic codes associated with premature birth for patients whose parents had agreed at a previous clinic visit to be contacted about potential research studies. Parents of the patients identified through those efforts were contacted about possible participation through letters and emails. To qualify, children needed to be between 5;0 and 7;0 years of age. They had to have normal hearing and no frank disabilities that are known to affect language development, other than possibly low socioeconomic status or premature birth. They could not have experienced an intracranial hemorrhage at any time, and could not have had more than five diagnoses of otitis media of any sort made by an otolaryngologist before the age of 3 years. From this process, parents of 114 children contacted us about possible participation. Four of these children came to the laboratory but were dismissed before being consented: one because of active otitis media on test day and three because they presented with autism. Thus, 110 children were consented. One of these children was dismissed partway through testing due to a conduct disorder and another because it became clear partway through testing there was a significant developmental delay. One child’s data were more than 3 SDs from the mean of the other children on all tasks, so that child’s data were not included. Review of medical records for three children revealed periods of raised auditory thresholds earlier in life, so their data were not included in these analyses. In total, data from 104 children were included.
Fifty-one percent (53) of these subjects were male and 49 percent (51) were female. Twenty-eight percent (29) of these subjects were identified by their mothers as Black, 52 percent (54) as White, 3 percent (3) as Asian, and 17 percent (18) as Multiracial. All children identified as Multiracial had one parent who identified as Black and one parent who identified as White. All children included in the study passed a hearing screening consisting of the octave frequencies between 0.5 and 4.0 kHz at 20 dB hearing level for both ears, as well as an otoacoustic emissions screening. None of the children had flat tympanograms at the time of testing. All children were administered the figure-ground subtest of the Leiter International Performance Scale – Revised [67] and had scaled scores above 7 (i.e., better than −1 SD). Forty-nine percent (51) of the children had public health insurance (i.e., Medicaid).
Table 1 displays means, medians, and SDs for several independent variables. Socioeconomic status was a two-factor index, computed for each child using a formula described previously by Nittrouer and Burton [68]. This index was based on that of Hollingshead [69] and involves scaling from 1 to 8 both the educational attainment and occupational status of the parent in the household with the highest income; these values are multiplied together to produce an index between 1 and 64 (low to high). The index we used differs from that of Hollingshead primarily in that occupations were updated to incorporate technical occupations that did not exist in the 1950s and to remove occupations that no longer exist. The values used to compute socioeconomic status were obtained from the subject’s mother. In general, scores of 15 or below indicate abject poverty, where the primary income earner has at most a high school diploma and is unemployed or does not have stable employment. Thirty-six percent (37) of the children in this study were in this category. Scores between 16 and 29 generally indicate that the highest income earner has a high school diploma or possibly an associate’s degree and is working in a service industry or in a retail sales position; they may or may not have stable employment. Nineteen percent (20) of the children in this study were in this category. Scores between 30 and 42 generally indicate that the primary income earner has a four-year university degree and a stable career at a level that is considered middle class. Twenty-eight percent (29) of the children in this study were in this category. Scores above 42 indicate that the primary income earner has at least a master’s degree, and often a higher degree, and holds a more prestigious job, such as a university professor, medical doctor, or attorney. Seventeen percent (18) of the children in this study were in this category.
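The two-factor computation described above can be sketched as follows. This is an illustrative rendering only; the actual scale values for education and occupation come from Nittrouer and Burton [68], and the band labels here are paraphrases of the descriptions in the text (which leave a score of exactly 15 ambiguous; it is grouped with the lowest band in this sketch).

```python
def ses_index(education, occupation):
    """Two-factor socioeconomic index after Hollingshead, as described in the
    text: educational attainment and occupational status of the highest-earning
    parent, each scaled 1-8; their product yields 1 (low) to 64 (high)."""
    if not (1 <= education <= 8 and 1 <= occupation <= 8):
        raise ValueError("both factors must lie on the 1-8 scale")
    return education * occupation

def ses_band(index):
    """Descriptive bands reported in the text; labels are paraphrased."""
    if index <= 15:
        return "poverty"            # at most high school, unstable employment
    if index <= 29:
        return "working class"      # high school or associate's degree
    if index <= 42:
        return "middle class"       # four-year degree, stable career
    return "upper middle class"     # advanced degree, prestigious occupation
```

For example, a parent scaled 4 on education and 4 on occupation receives an index of 16, falling just above the poverty band.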
Maternal education was obtained from mothers, as well. Although the education values used in the computation of socioeconomic status were for the highest income earner, which could mean the child’s father, maternal education correlated highly with socioeconomic status, r(104) = .822, p < .001. Values for gestational age at birth, birth weight, and time in the neonatal intensive care unit were obtained from the subject’s mother, and were confirmed from the children’s medical records. Both birth weight and time in the neonatal intensive care unit correlated highly with gestational age at birth: birth weight, r(104) = .823, p < .001, and time in the neonatal intensive care unit, r(104) = −.779, p < .001.
Equipment
Hearing screenings were performed with a Grason-Stadler GSI 61 audiometer and TDH-39 headphones. These hearing screenings, as well as testing for the modulation detection tasks, auditory comprehension of language, and the phonological sensitivity tasks, took place in a soundproof booth. Figure-ground and vocabulary testing took place in a quiet room. Tympanometry was performed with an Interacoustics Titan. Otoacoustic emissions were obtained with a Grason-Stadler Corti. Stimuli for both the modulation detection and phonological sensitivity tasks were presented through a computer, with a Creative Labs Soundblaster soundcard, a Samson C-Que 8 amplifier, and AKG-K141 headphones. Children’s responses for the figure-ground, vocabulary, auditory comprehension of language, and phonological tasks were video recorded using a SONY HDR-XR550V video recorder so that responses could be checked later by independent staff members.
Stimuli and task-specific procedures
Audiometric Testing.
Children had their hearing screened using behavioral methods at the octave frequencies between 0.5 and 4.0 kHz at 20 dB hearing level for each ear. In addition, distortion-product otoacoustic emissions were obtained for 2.0, 3.0, 4.0, and 5.0 kHz and tympanograms were obtained. Each child needed to pass the behavioral screening and obtain a pass (rather than a refer) with otoacoustic emissions. A Type B tympanogram was an exclusionary criterion.
Modulation detection.
Stimuli for measuring temporal modulation detection were generated in Matlab and consisted of broadband noise (0.05 to 8.0 kHz). Standard stimuli had flat envelopes. Target stimuli were sinusoidally amplitude modulated in the time domain at one of two rates: 8 Hz or 64 Hz. Both these rates are below the 3-dB cut-off found for listeners in [8], as shown in Fig 2. All stimuli had 20-ms cosine-squared onset and offset ramps. RMS amplitude was equalized across stimuli, and duration of all stimuli was 500 ms. Modulation depth (m) varied between 0 and 1, and is expressed in dB as 20·log10(m). Initial depth was 0 dB (maximum depth), with a step size of 4 dB for the first two reversals and 1 dB for the next eight reversals. These stimuli were presented at 68 dB sound pressure level. Detection thresholds were defined as the means of the last eight reversals. More negative thresholds represented better temporal modulation detection.
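The depth-to-dB mapping used for these thresholds, and the sinusoidal amplitude modulation itself, can be sketched as follows in Python (the `sam` helper and the sampling rate are illustrative assumptions; the actual stimuli were generated in Matlab):

```python
import math

def depth_to_db(m: float) -> float:
    """Modulation depth m (0 < m <= 1) expressed in dB as 20*log10(m):
    full modulation (m = 1) is 0 dB; shallower depths are negative."""
    return 20.0 * math.log10(m)

def db_to_depth(db: float) -> float:
    """Inverse mapping, used when stepping the adaptive track in dB."""
    return 10.0 ** (db / 20.0)

def sam(carrier, rate_hz, m, fs=44100.0):
    """Sinusoidally amplitude-modulate a carrier (a list of samples)
    at rate_hz with depth m; the standard stimulus is the m = 0 case."""
    return [s * (1.0 + m * math.sin(2.0 * math.pi * rate_hz * n / fs))
            for n, s in enumerate(carrier)]
```

Stepping down 4 dB from full modulation thus moves depth from 1.0 to about 0.63.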
Stimuli for measuring spectral modulation detection were also generated in Matlab. Each stimulus consisted of 800 sinewave components of equal amplitude, logarithmically spaced between 0.1 and 5.0 kHz. Starting phase of each component was randomly selected on each trial. These stimuli had flat spectra and were used as the standards. Target stimuli were created by applying a sinusoidal envelope with a modulation rate of 0.5 cpo to the spectrum. Starting phase of that modulated envelope changed randomly across trials. The overall envelope was further shaped in two ways. First, amplitude of the lowest frequency components increased gradually, and amplitude of the highest frequency components decreased gradually. Second, a speech-shaped envelope [70] was applied. These measures ensured that there were no large amplitude discontinuities at stimulus edges. RMS amplitude was equalized across stimuli, and duration of all stimuli was 500 ms. Modulation amplitude for these stimuli is given in dB change from peak to trough. In this task, the original depth of modulation was 30 dB peak to trough and step size was 4 dB for the first two reversals. Step size then changed to 2 dB where it remained for eight more reversals. These stimuli were presented at 68 dB sound pressure level, roving by +/-3 dB. The detection threshold was defined as the mean of the last eight reversals.
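The construction of log-spaced components and a sinusoidal ripple specified in cycles per octave (cpo) can be sketched as follows (Python for illustration; function names are assumptions, and details such as the edge shaping and speech-shaped envelope are omitted):

```python
import math

def log_spaced(f_lo: float, f_hi: float, n: int) -> list:
    """n component frequencies logarithmically spaced from f_lo to f_hi."""
    return [f_lo * (f_hi / f_lo) ** (i / (n - 1)) for i in range(n)]

def ripple_gain_db(freqs_hz, depth_db, rate_cpo, phase=0.0):
    """Sinusoidal spectral envelope: a ripple across log2(frequency) at
    rate_cpo cycles per octave with the given peak-to-trough depth in dB."""
    return [0.5 * depth_db *
            math.sin(2.0 * math.pi * rate_cpo * math.log2(f) + phase)
            for f in freqs_hz]
```

With 800 components between 0.1 and 5.0 kHz and a 0.5-cpo ripple, the spectrum spans roughly 5.6 octaves, so the envelope completes nearly three full cycles across the stimulus bandwidth.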
A three-alternative, forced-choice procedure created in Matlab was used in a two-down, one-up adaptive paradigm to obtain thresholds for 70.7% correct detection of both temporal and spectral modulation. Two runs of each of the three stimulus sets were presented, and the mean threshold across the two runs was used as the measure in further analyses.
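The two-down, one-up rule converges on the 70.7%-correct point of the psychometric function. A minimal Python sketch of such an adaptive track, using the step sizes and reversal counts of the temporal task (the function name is an assumption, and `respond` stands in for the child's trial-by-trial responses):

```python
import statistics

def staircase_2d1u(respond, start=0.0, big_step=4.0, small_step=1.0,
                   step_change_after=2, total_reversals=10):
    """Two-down, one-up adaptive track.

    respond(level_db) -> True if the target was identified correctly.
    Two consecutive correct responses make the task harder (level goes
    down); one incorrect response makes it easier (level goes up).
    Step size is big_step dB until `step_change_after` reversals have
    occurred, then small_step dB.  Threshold = mean of the reversals
    collected at the small step size (the last eight, here)."""
    level = start
    correct_in_row = 0
    direction = 0          # -1 = getting harder, +1 = getting easier
    reversals = []
    while len(reversals) < total_reversals:
        if respond(level):
            correct_in_row += 1
            if correct_in_row == 2:
                correct_in_row = 0
                if direction == +1:
                    reversals.append(level)
                direction = -1
                step = big_step if len(reversals) < step_change_after else small_step
                level -= step
        else:
            correct_in_row = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            step = big_step if len(reversals) < step_change_after else small_step
            level += step
    return statistics.mean(reversals[step_change_after:])
```

With a simulated listener that detects any depth above −10 dB, for instance, the track's reversals straddle that boundary and the threshold lands close to it.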
In this task, the child saw a display on the computer monitor of either three cartoon robots or three cats in a horizontal array. The child was told that one of the characters made a different sound from the others. As each of the three stimuli was played, each of the characters pulsed, in turn from left to right. The child needed to identify the one that was different.
Prior to testing, the child was given practice identifying stimuli at the greatest modulation depth. The child had to identify the modulated stimulus correctly on nine out of ten consecutive trials without prompting in order to move to testing. They had a maximum of 30 trials in which to meet this criterion. Six children failed to meet this practice criterion for 8-Hz temporal modulation, 16 failed to meet the criterion for 64-Hz temporal modulation, and 18 children failed to meet the criterion for spectral modulation. If a child failed to meet the practice criterion, maximum depth of modulation was used as their threshold for that run on the premise that it was the best estimate possible. Because all children who failed to pass training on one of these runs did pass on at least one of the other tasks, it was concluded that these children could perform a three-interval, forced-choice procedure.
Several steps were taken to ensure that children were attending to the task throughout testing. First, ‘easy’ trials consisting of target stimuli with the deepest modulation were presented every four to six trials. Children were required to identify these stimuli correctly (i.e., no more than two incorrect responses per run) for their data for that run to be included in analysis, a criterion they all met. In general, children did not make errors on these easy trials: mean numbers of errors were less than 1.0 for all three modulation conditions. The possibility was considered, however, that problems in attention might arise only near threshold. To assess that possibility, two additional measures were obtained across the last eight reversals: the SD of those reversals and the mean length of excursion (MLE), which was the mean distance between each consecutive pair of reversals for the last eight reversals.
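Both near-threshold attention checks can be computed directly from the reversal levels of a run; a short sketch (the function name is an assumption):

```python
import statistics

def reversal_stability(reversals, n_last=8):
    """Attention checks over the last n_last reversal levels: their SD,
    and the mean length of excursion (MLE), i.e., the mean absolute
    distance between each consecutive pair of reversals."""
    last = reversals[-n_last:]
    sd = statistics.stdev(last)
    mle = statistics.mean(abs(b - a) for a, b in zip(last, last[1:]))
    return sd, mle
```

A child tracking steadily near threshold produces reversals that alternate within a step or two, so both values stay close to the step size; lapses in attention inflate them.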
Fig 7 shows adaptive tracks from one child. The top track is for temporal modulation at the 8-Hz modulation rate and the bottom track is for spectral modulation. The red circles show the reversals. The green triangles show responses to easy trials, all of which were correct.
Red circles are reversals and green triangles are responses to easy trials.
Vocabulary.
Vocabulary knowledge was measured with the Expressive One-Word Picture Vocabulary Test – 4th Edition [71]. In this task, a series of pictures are shown and the child is asked to label each one. The vocabulary items become less common as the test proceeds. Testing stopped after six consecutive errors. All testing was recorded and reviewed later to ensure correct scoring. Standard scores are given in Table 2 to provide a picture of the sample, but raw scores were used in analyses.
Auditory comprehension of language.
Children’s abilities to comprehend spoken language were assessed using the auditory comprehension subtest of the Preschool Language Scales – 5 [72]. This task requires the child to demonstrate an understanding of spoken language by performing specific commands given by the examiner. Testing stopped after six consecutive errors. All testing was recorded and reviewed later to ensure correct scoring. Standard scores are shown in Table 2, but raw scores were used in analyses.
Phonological sensitivity.
Children were administered two tasks assessing phonological sensitivity. This sensitivity emerges at different developmental stages according to level of structure [73,74], with sensitivity to syllable units appearing first, then rimes, followed by syllable onsets, and finally syllable offsets. For 5- and 6-year-olds, sensitivity to rimes and syllable onsets is most strongly emergent, so tasks were selected to assess these levels of sensitivity. A rhyme oddity task was used to assess the first of these, children’s sensitivity to rime structure. In this task, the child saw three cartoon faces on the computer monitor and heard three words. The faces pulsed in order as each word was presented, from left to right. The child’s task was to select the word that did not rhyme. Six practice items were provided. The experimenter initially coached the child, if needed. If the child could not respond correctly to any of the practice trials without coaching the first time they were presented, these practice trials were repeated. Testing then commenced. There were 36 items in this test, and testing was discontinued if the child responded incorrectly to six consecutive trials. The percent correct score was used as the dependent measure.
The other task of phonological sensitivity assessed children’s abilities to match sounds at the beginnings of words. In this task, the initial consonant choice (ICC) task, subjects were first presented with a target word, which they had to repeat. Auditory presentation of this target word was accompanied by the presentation of a cartoon face in the middle of the computer monitor. Next the child saw three faces and heard three word choices. The faces pulsed one at a time from left to right as each word was presented. The child had to select the one that started with the same ‘sound’ (i.e., consonant) as the target word. Before testing, six practice trials were presented. Then testing commenced. Testing was discontinued if the child responded incorrectly to six consecutive trials. There were 48 items. The percent correct score was used as the dependent measure.
Stimuli in both phonological sensitivity tasks were presented at 68 dB sound pressure level. All testing was recorded and reviewed later to ensure correct scoring.
General procedures
Children came to the laboratory with their mothers. All testing was completed in one day during four sessions with breaks between each session. Lunch was provided between the second and third sessions. Consent was obtained before testing started. The consent form was sent to parents at the time they scheduled participation, so they had time to read it before coming to the laboratory. When they arrived, the mother and child were given a ‘tour’ of the laboratory, showing them where they would be participating; the tasks were explained. They were given the opportunity to ask questions. Parents signed the consent form on behalf of their children, and children gave verbal assent, witnessed by a staff member.
In the first session, audiologic testing was completed, along with the figure-ground and the vocabulary tasks. In the second session the three modulation sensitivity measures were administered, along with the ICC task. The third session consisted of the auditory comprehension of language task and the rhyme oddity task. The last session consisted of the three modulation sensitivity measures again. Parents were compensated $100 for their participation in the study. Children got a small toy at the end of each session and a T shirt at the end of the day.
Results
SPSS was primarily used for statistical analyses. Data screening demonstrated that the four language measures and the three measures of auditory function were normally distributed.
Description of results
Table 2 displays means, medians, and SDs for the two lexicosyntactic measures, the two phonological sensitivity measures, and the three measures of modulation detection. Four of these scores can be compared to those from the 5- and 6-year-olds whose data are included in Figs 5 and 6: vocabulary, ICC, temporal modulation detection at 64 Hz, and spectral modulation detection. The same vocabulary test was administered in that study as in the current study, and that study included the ICC task as well as an FCC task. Means (and SDs) for children from [8] are shown in Table 3; these scores are only for the children with no histories of chronic otitis media. Comparing scores in Table 2 for children in the current study with scores in Table 3 for children in that earlier study shows that, overall, children in the current study performed more poorly. This difference likely arose because the current sample spanned wide ranges of socioeconomic status and gestational age at birth, whereas children in that previous study had a mean socioeconomic score of 34.3 (median = 35) and none were born prematurely. Nonetheless, the scores in Table 3 provide benchmarks of best performance for children in this age range, meaning for middle-class children who were not born prematurely and did not have any other risk factors for language delay.
The measures collected to examine children’s response reliability were also examined: the SD and MLE of the last eight reversals. Table 4 shows means, medians, and SDs for these values in each of the three conditions. Clearly, these values are not much larger than the 1-dB step size used in measuring sensitivity to temporal modulation or the 2-dB step size used in measuring spectral modulation. In sum, these 5- and 6-year-olds were able to maintain attention to these tasks.
Tests of Hypothesis #1
This hypothesis was that the development of the suprathreshold auditory functions examined here more strongly affects acquisition of phonological sensitivity than of lexicosyntactic knowledge. As a prelude to testing this hypothesis, Pearson correlation coefficients were computed between each pair of the three modulation detection thresholds to see if all these measures were assessing the same phenomenon, presumably the development of the central auditory pathways. For the two measures of temporal modulation detection, a high correlation was found, r(104) = .756, p < .001. Spectral modulation detection also correlated highly with both temporal measures: with 8-Hz temporal modulation, r(104) = .658, p < .001; and with 64-Hz temporal modulation, r(104) = .647, p < .001. Thus, it was concluded that all three measures of modulation detection largely evaluate the same phenomenon; nonetheless, the way that these specific aspects of auditory function support language acquisition may differ.
Moving to actual tests of the hypothesis, Table 5 displays Pearson correlation coefficients for each of the three measures of modulation detection and each of the lexicosyntactic and phonological sensitivity measures. All these correlation coefficients are significant, and medium to large in effect size. There is also a clear pattern across language tasks: Each modulation detection threshold had the smallest correlation coefficient for vocabulary, and the highest for one or the other phonological sensitivity task: ICC or rhyme oddity. Correlation coefficients between the modulation detection tasks and the auditory comprehension task were intermediate between those for vocabulary and those for the two phonological sensitivity measures, with one exception: the correlation between auditory comprehension and 64-Hz temporal modulation was slightly higher than the correlation between ICC and 64-Hz modulation.
Although informative, these individual correlation coefficients do not account for the likelihood that the three measures of auditory function had overlapping effects on the language measures. Therefore, stepwise regression was conducted to examine the unique contributions of each auditory function to each of the lexicosyntactic and phonological sensitivity measures. These results, shown in Table 6, reveal that spectral modulation detection explained the most variability in three of the four language measures; temporal modulation detection at 8 Hz explained the most variability in the fourth, rhyme oddity. For all language measures, temporal modulation detection at 64 Hz explained no additional variability once spectral modulation detection thresholds and thresholds for temporal modulation detection at 8 Hz were incorporated. Finally, reflecting the pattern of results for the correlation coefficients shown in Table 5, it can be seen in Table 6 that the auditory functions explained the least amount of variability for vocabulary and the most for the two measures of phonological sensitivity; the amount of variability in auditory comprehension explained by these auditory functions was intermediate.
The results described above demonstrate that these suprathreshold auditory functions explain a moderate amount of variability in both the lexicosyntactic and phonological sensitivity measures. The crux of Hypothesis #1, however, was that these auditory measures would explain more variability in phonological sensitivity than in lexicosyntactic knowledge. From Tables 5 and 6 it appears the hypothesis was supported. In Table 5, it is found that thresholds for all three measures of modulation detection were more strongly correlated with ICC and rhyme oddity than with vocabulary or auditory comprehension, except for 64-Hz temporal modulation, which was more strongly correlated with auditory comprehension than with ICC. In Table 6, R2 was greater for ICC and rhyme oddity than for vocabulary or auditory comprehension, indicating that these auditory functions explained more variability in the phonological sensitivity measures than in the lexicosyntactic measures. Nonetheless, it was necessary to examine whether these differences in correlation coefficients between the kinds of language skills were significant in order to support fully the conclusion that the auditory measures were more strongly related to phonological sensitivity than to lexicosyntactic knowledge. To do so, we considered each measure of modulation detection separately, comparing its relationship with each lexicosyntactic measure to its relationship with each phonological sensitivity measure. Using R software, test statistics were calculated as the difference between the two correlation coefficients divided by the corresponding standard error. Standard errors and p-values were calculated using nonparametric bootstrapping procedures. Derived test statistics are shown in Table 7. 
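The bootstrap comparison described above can be sketched as follows: resample children with replacement, recompute both correlations on each resample so that their dependence is preserved, and divide the observed difference by the bootstrap standard error (Python for illustration; the original analysis was done in R, and all variable and function names here are hypothetical):

```python
import math
import random

def pearson_r(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def corr_diff_stat(aud, phon, lex, n_boot=2000, seed=1):
    """Test statistic for r(aud, phon) - r(aud, lex): the observed
    difference divided by its bootstrap SE.  Resampling whole children
    (rows) preserves the dependence between the two correlations."""
    rng = random.Random(seed)
    n = len(aud)
    observed = pearson_r(aud, phon) - pearson_r(aud, lex)
    diffs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        diffs.append(pearson_r([aud[i] for i in idx], [phon[i] for i in idx])
                     - pearson_r([aud[i] for i in idx], [lex[i] for i in idx]))
    mean_d = sum(diffs) / n_boot
    se = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n_boot - 1))
    return observed / se
```

A two-sided p-value can then be read from the bootstrap distribution or a normal approximation of the statistic.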
Using an alpha of .05, three of these comparisons are significant, all involving the vocabulary measure: the correlation of spectral modulation and ICC is greater than the correlation of spectral modulation and vocabulary, and the two correlations involving 8-Hz temporal modulation are greater for phonological sensitivity than for vocabulary knowledge. Although the test statistic for spectral modulation and vocabulary versus rhyme oddity does not reach the .05 probability level, it is close, falling below .10. Thus, the pattern of results that emerges from this set of test statistics matches what has been seen in Tables 5 and 6: the relationships between the auditory functions and vocabulary knowledge are the weakest and the most likely to differ significantly from those between the auditory functions and phonological sensitivity. Although correlation coefficients were generally smaller for the auditory functions and auditory comprehension, they were not statistically different from those found for auditory functions and phonological sensitivity.
Tests of Hypothesis #2
The second hypothesis tested in this study was that factors previously shown to negatively impact language acquisition would be found to take their toll in part or wholly through their effects on the development of auditory functions. To test this hypothesis, we first correlated the independent variables of socioeconomic status and gestational age at birth with both our language and auditory measures. Table 8 displays Pearson correlation coefficients between each of the two independent variables and each of the lexicosyntactic and phonological sensitivity measures, as well as the measures of auditory function. Both socioeconomic status and gestational age at birth were significantly correlated with all four language measures, as well as with the measures of auditory function.
Socioeconomic status is a major source of variability in language acquisition, and the mechanism typically invoked to explain this effect involves language models in the environment, which are lacking in both quantity and quality for children in poverty (e.g., [46]). We hypothesized that socioeconomic-related effects on the development of the central nervous system, including the auditory pathways, also contribute to the observed relationship between socioeconomic status and language acquisition. Comparing the coefficients displayed in Table 8 to those displayed in Table 5 provides some support for that hypothesis. Table 5 shows that auditory functions are strongly associated with language measures. Table 8 shows that socioeconomic status is significantly correlated with both auditory functions and language measures, although the strength of its association with language measures is typically less than the association between auditory functions and language measures. This combination of results would be expected if the effect of socioeconomic status on language measures operates at least partly through an effect on auditory functions, which in turn affect language development. Testing this directly would require mediation analysis, which we could not perform because it requires repeated measures over time from a stable sample. Instead, we took an alternative approach: partial correlation analysis, to see if the associations between socioeconomic status and language measures decreased once we accounted for auditory functions. Similarly, we performed partial correlation analysis to see if the associations between auditory functions and language development decreased once we accounted for socioeconomic status (and thus all the other factors that covary with it).
If the correlations between socioeconomic status and language development were found to be much smaller after accounting for auditory functions, but the correlations between auditory functions and language development remained high, even after adjusting for socioeconomic status, this would provide evidence that the effects of socioeconomic status on language development are partly through an effect on auditory functions. Similarly, if socioeconomic status was found not to have a significant partial correlation with language development after adjusting for auditory functions, it would suggest that all of the effect of socioeconomic status on language development is through an effect on auditory functions.
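With only cross-sectional data, the partial correlation of two variables controlling for a third can be obtained by correlating regression residuals; a minimal sketch (Python for illustration; the actual analyses were run in SPSS, and the function names are assumptions):

```python
import math

def pearson_r(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def partial_r(x, y, z):
    """Correlation of x and y after removing the linear effect of z:
    correlate the residuals of the x-on-z and y-on-z regressions."""
    def residuals(v, w):                 # residuals of v regressed on w
        n = len(v)
        mv, mw = sum(v) / n, sum(w) / n
        slope = (sum((a - mw) * (b - mv) for a, b in zip(w, v))
                 / sum((a - mw) ** 2 for a in w))
        return [b - (mv + slope * (a - mw)) for a, b in zip(w, v)]
    return pearson_r(residuals(x, z), residuals(y, z))
```

For a single control variable this residual method is algebraically identical to the familiar closed form (r_xy − r_xz·r_yz)/√((1 − r_xz²)(1 − r_yz²)).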
The choice of which modulation detection threshold to include in each analysis was based on the stepwise regression results shown in Table 6: For vocabulary knowledge, only thresholds for spectral modulation detection were included in the analysis because it was the only auditory function found to account for any significant amount of variability in vocabulary scores. For the other measures, detection thresholds for both spectral modulation and temporal modulation at 8 Hz were included, because each of these measures of auditory function accounted for significant amounts of variability. Table 9 shows results of these partial correlation analyses and reveals that both socioeconomic status and the modulation detection thresholds explained significant amounts of variability in the lexicosyntactic and phonological sensitivity measures, after controlling for the other. Nonetheless, it is also apparent that the auditory functions explained more variability when socioeconomic status was controlled (bottom row) than vice versa (top row). Thus, within the limits of this analysis, support is garnered for the hypothesis that socioeconomic status impacts language acquisition in some part through its negative effects on the development of the central auditory pathways.
Gestational age at birth is another factor known to be associated with language development. As with socioeconomic status, we were interested in examining the extent to which gestational age at birth and modulation detection were independent sources of variability in the language measures. To examine this question, we conducted partial correlation analyses, alternately controlling for either gestational age at birth or the relevant modulation detection thresholds, as we did with socioeconomic status. These results are shown in Table 10. In this case, gestational age at birth did not account for a significant amount of variability, once auditory functioning was accounted for. Modulation detection thresholds continued to account for significant amounts of variability. Therefore, within the limits of this analysis, it can be concluded that the primary pathway of effect of premature birth on language development is through its effect on the development of the central auditory nervous system.
Discussion
This study was undertaken to explore the general proposal that rate of development of the central auditory pathways exerts an influence on language development by supporting or constraining the emergence of the language skills most dependent on auditory functioning. To explore this proposal, measures of temporal and spectral modulation detection were obtained, along with measures of language abilities from children in an age range when development of both sorts of skills should be maximally variable (i.e., during a sensitive period). Two specific and testable hypotheses were posed: (#1) The timely development of auditory functions more strongly affects later-emerging sensitivity to phonological structure than lexicosyntactic abilities, which begin to emerge earlier; and (#2) Several conditions known to negatively influence language development actually exert that influence, at least partly, by interfering with the development of auditory functions; conditions considered were socioeconomic status and gestational age at birth.
The first hypothesis was partly supported. Correlation coefficients were slightly, but consistently, stronger between the measures of auditory function and measures of phonological sensitivity than between measures of auditory function and lexicosyntactic knowledge, especially vocabulary, although these differences were not always significant. An important nuance of these outcomes is that the reason this first hypothesis was only partly supported was that the correlations between auditory functions and lexicosyntactic knowledge were stronger than might have been expected. In fact, spectral modulation detection explained as much variability in lexicosyntactic knowledge as, or more than, the independent variables considered in this study that are generally thought to be strongly related to language development: socioeconomic status and gestational age at birth. And it is not the case that weaker relationships were observed in this study than in previous studies between these independent variables and scores on the lexicosyntactic tasks. Results for socioeconomic status best illustrate trends across studies. Correlation coefficients between socioeconomic status and language performance are typically around .3. For example, Fernald et al. [50] found a correlation coefficient of .34 between vocabulary and socioeconomic status, as indexed by the Hollingshead index [69]. Here we obtained a correlation coefficient of .429 between vocabulary and our measure of socioeconomic status, which was derived from the Hollingshead index. Thus, the effects of socioeconomic status on lexicosyntactic knowledge in this study resembled – or were somewhat greater than – what has been previously reported. All the same, the effects of spectral modulation detection on vocabulary in this study were equal to, or slightly higher than, the effects of socioeconomic status.
Where auditory comprehension is concerned, all three measures of auditory function explained more variability in scores on this measure than did socioeconomic status or gestational age at birth. And in this case, none of the comparisons of correlation coefficients between the auditory measures and auditory comprehension versus phonological sensitivity were significant, revealing that the measures of auditory function explained close to the same amount of variance in auditory comprehension of language as in phonological sensitivity; this differs from findings for vocabulary, where four of the six test statistics were significant, or close to significant. This difference in outcomes for vocabulary and auditory comprehension might be explained by the fact that the auditory comprehension subtest of the Preschool Language Scales has some items that either explicitly test phonological sensitivity or indirectly test that sensitivity through measures of comprehension of inflectional morphemes. As a consequence, some part of the score on this auditory comprehension task likely reflects the phonological sensitivity of the test taker. Regardless, even though the finding that the development of these auditory functions was related to lexicosyntactic acquisition to some extent may weaken support for the first hypothesis, it strengthens the overall proposal that timely development of the central auditory pathways is critical to language acquisition.
Moderate support is also provided for the second hypothesis, that conditions previously found to influence language development may exert that influence (at least partly) by interfering with the development of auditory functions. Where socioeconomic status is concerned, partial correlation analysis revealed that the effects on language skills of the auditory functions and other factors related to socioeconomic status were to some extent independent: Each explained some unique variability in the language scores, although auditory functions explained more variability. Correlation coefficients between gestational age at birth and the language measures – lexicosyntactic and phonological – were also statistically significant. In this case, partial correlation analyses revealed that those effects of gestational age on the language measures ceased to be significant when auditory functions were controlled. Thus, gestational age – and so premature birth – appears to take its toll on language acquisition entirely through whatever effect it has on the development of the central auditory pathways.
Limitations
The major limitations of the current study were that it was largely correlational in design, and not all independent variables that may influence language outcomes were measured. Where socioeconomic status is concerned, no measures of parental language input were obtained for these particular children. The assumption was largely made here that language input serves as a social determinant of language, and the auditory functions we examined are biological determinants. However, it is possible that variability in the language children hear affects development of central nervous system structures, including the auditory pathways [75]. Future investigations will need to refine our understanding of the directions of effect of the variables associated with socioeconomic status and gestational age at birth, of which there are many. In the meantime, this study contributes to our current understanding of language acquisition and the factors that can promote or constrain it by demonstrating that the timing of development of certain suprathreshold auditory functions influences the development of language skills, especially those related to phonological sensitivity.
Implications
The outcomes of this study contribute to models of language development and delay in several ways. There is likely no issue more controversial in the field of communication sciences and disorders than whether disruptions in auditory function beyond peripheral sensitivity can explain developmental language delays and disorders. Numerous studies have explored this idea that auditory dysfunctions are responsible for delays in language and literacy acquisition (e.g., [76–79]). The choice of auditory function and the selection of the language or literacy measure to include in these studies has varied, often without a clear vision of how the chosen auditory function would be expected to support acquisition of the specific language or literacy phenomenon being measured. In designing this study, we started with the selection of the language measures – lexical, syntactic, and phonological – from the perspective that phonological sensitivity begins to emerge later than children begin learning words and how to order them in sentences. Then we worked backwards to consider the auditory functions that would be the most likely candidates to support that later-emerging language skill of phonological sensitivity. In this way we were able to design a precise test of whether specific auditory functions support a specific language skill. Moreover, we carefully selected an age range in which it appeared that development of the specific functions under study would be progressing at the most rapid rate – an apparent sensitive period for the development of these auditory functions and phonological sensitivity. This age range should, it was predicted, provide the greatest variability across children and allow the strongest test of whether the emergence of these auditory functions supports, and so potentially constrains, the acquisition of phonological sensitivity. Generally speaking, the outcomes of these analyses supported the hypotheses offered.
Implications of these findings, however, are not limited to the age range studied here or to the childhood conditions examined. The notion of sensitive periods in development is well accepted where language acquisition is concerned (e.g., [80,81]), although the idea that there would be only one sensitive period for this development is generally eschewed (e.g., [82,83]). It is more likely that different components of language functioning emerge in a relatively consistent order, with periods of rapid emergence for each of those components (e.g., [84,85]). The auditory functions and phonological sensitivity phenomena examined in this study appear to emerge in large part during the age range studied. Other skills at least start to develop at earlier ages, and still other skills will only start to develop at later ages. The children in this study who performed poorly on the tasks examined – and so were delayed in the development of these auditory and language skills – might nonetheless acquire those particular skills, albeit at a slightly older age. The implications of this late development are currently uncertain, but it seems a reasonable prediction that there would be a domino effect, with these children delayed in each subsequently emerging auditory and language function. The question becomes whether there is an age at which language acquisition ends, regardless of whether a child has successfully navigated each of those sensitive periods. Moreover, delays in auditory and language development surely have negative consequences for academic pursuits. If a child does not have the language skills expected at each grade level, the child will not be able to meet expectations for that grade.
Implications of these findings also extend beyond children who may be living in poverty or who may have been born prematurely, because these conditions can co-occur with other risk factors for language delay. For example, children with congenital hearing loss may be living in poverty or may have been born prematurely; in fact, premature birth is itself a risk factor for hearing loss [86]. Because these conditions of poverty and premature birth were found, through behavioral measures, to hinder the development of the central auditory pathways, we can predict that children with hearing loss who also experience one of these conditions will have poorer outcomes than children with hearing loss alone.
One of the most significant implications of the findings from this investigation should concern how we view the effects of poverty on language acquisition. Unquestionably the predominant model of how poverty affects language acquisition is one involving social determinants. According to this model, the language environments of children in poverty are less supportive of language acquisition than the environments of children not living in poverty. In particular, it has been documented that women (mothers) in poverty talk to their children differently than women not living in poverty, using more directives with no expectation of responses from their children and fewer linguistic devices that support the continuation of interactions (e.g., [46–49,87]). This model, however, has always suffered from small effect sizes. Moreover, even when differences in language input styles are controlled, children in poverty still demonstrate poorer language, with much of the variance left unexplained. These results prompted Perkins et al. [41] to ask “…if poverty has a unique effect on language development, what is the mechanism through which poverty, controlling for other factors, influences language?” (p. 16). The study reported here helps to answer that question: At least one mechanism through which poverty influences language is its effect on the development of the central auditory pathways.
Another, though not mutually exclusive, proposal for how poverty affects language acquisition is the idea that poverty imposes stress on all members of the family unit [88]. That stress, it is hypothesized, is what disrupts the development of the central nervous system [83,84]. Regardless of the precipitating factor, however, evidence clearly reveals poverty-related disruptions in central nervous system development [51,52,89]. In light of these demonstrated relationships between poverty and the structure of the central nervous system in general, it is not surprising that the current study found evidence of delays in the development of the central auditory pathways in particular.
Conclusions
Acquiring human language is arguably the greatest accomplishment of childhood. Although it can appear effortless for children developing in typical fashion, it is in fact a difficult feat requiring that the development of many skills proceed “on time” and in a synchronous manner. The purpose of the study reported here was to test two hypotheses related to language acquisition and delay. Hypothesis 1 was that delays in the development of the central auditory pathways can constrain language acquisition, especially of phonological sensitivity as compared to lexicosyntactic knowledge. Hypothesis 2 was that conditions previously demonstrated to hinder language acquisition take their toll partly or wholly through the constraints they impose on the development of the central auditory pathways. Measures of auditory function and language abilities were collected from children spanning the socioeconomic spectrum and varying in gestational age at birth. Results from these children largely supported both hypotheses.
Acknowledgments
This work was supported by Grant R01 DC020918 from the National Institutes of Health, National Institute on Deafness and Other Communication Disorders to Susan Nittrouer. We are grateful to Donal Sinex and Vashist Hegde for the Matlab programs used in this work. We thank the Gainesville Housing Authority for help with recruitment and we thank the numerous University of Florida students who showed great dedication in helping us collect and score the data.
References
- 1. Olsho LW, Koch EG, Carter EA, Halpin CF, Spetner NB. Pure-tone sensitivity of human infants. J Acoust Soc Am. 1988;84(4):1316–24. pmid:3198866
- 2. Abdala C, Folsom RC. The development of frequency resolution in humans as revealed by the auditory brain-stem response recorded with notched-noise masking. J Acoust Soc Am. 1995;98(2 Pt 1):921–30. pmid:7642831
- 3. Spetner NB, Olsho LW. Auditory frequency resolution in human infancy. Child Dev. 1990;61(3):632–52. pmid:2364740
- 4. Walker BA, Gerhards CM, Werner LA, Horn DL. Amplitude modulation detection and temporal modulation cutoff frequency in normal hearing infants. J Acoust Soc Am. 2019;145(6):3667. pmid:31255105
- 5. Werner LA, Folsom RC, Mancl LR, Syapin CL. Human auditory brainstem response to temporal gaps in noise. J Speech Lang Hear Res. 2001;44(4):737–50. pmid:11521768
- 6. Werner LA. Issues in human auditory development. J Commun Disord. 2007;40(4):275–83. pmid:17420028
- 7. Hall JW 3rd, Grose JH. Development of temporal resolution in children as measured by the temporal modulation transfer function. J Acoust Soc Am. 1994;96(1):150–4. pmid:7598757
- 8. Nittrouer S, Lowenstein JH. Early otitis media puts children at risk for later auditory and language deficits. Int J Pediatr Otorhinolaryngol. 2024;176:111801. pmid:38048734
- 9. Cabrera L, Lau BK. The development of auditory temporal processing during the first year of life. Hearing Balance Commun. 2022;20(3):155–65. pmid:36111124
- 10. Schneider BA, Trehub SE, Morrongiello BA, Thorpe LA. Developmental changes in masked thresholds. J Acoust Soc Am. 1989;86(5):1733–42. pmid:2808922
- 11. Allen P, Wightman F. Spectral pattern discrimination by children. J Speech Hear Res. 1992;35(1):222–33. pmid:1735972
- 12. Horn DL, Dudley DJ, Dedhia K, Nie K, Drennan WR, Won JH, et al. Effects of age and hearing mechanism on spectral resolution in normal hearing and cochlear-implanted listeners. J Acoust Soc Am. 2017;141(1):613. pmid:28147578
- 13. Jahn KN, Arenberg JG, Horn DL. Spectral resolution development in children with normal hearing and with cochlear implants: a review of behavioral studies. J Speech Lang Hear Res. 2022;65(4):1646–58.
- 14. Peter V, Wong K, Narne VK, Sharma M, Purdy SC, McMahon C. Assessing spectral and temporal processing in children and adults using temporal modulation transfer function (TMTF), Iterated Ripple Noise (IRN) perception, and spectral ripple discrimination (SRD). J Am Acad Audiol. 2014;25(2):210–8. pmid:24828221
- 15. Bertoncini J, Bijeljac-Babic R, Jusczyk PW, Kennedy LJ, Mehler J. An investigation of young infants’ perceptual representations of speech sounds. J Exp Psychol Gen. 1988;117(1):21–33. pmid:2966228
- 16. Jusczyk PW. From general to language-specific capacities: the WRAPSA Model of how speech perception develops. Journal of Phonetics. 1993;21(1–2):3–28.
- 17. Metsala JL, Walley AC. Spoken vocabulary growth and the segmental restructuring of lexical representations: precursors to phonemic awareness and early reading ability. Word recognition in beginning literacy. Mahwah, NJ: Lawrence Erlbaum Associates Publishers. 1998. 89–120.
- 18. Peters AM. The units of language acquisition. New York: Cambridge University Press. 1983.
- 19. Waterson N. Child phonology: a prosodic view. J Ling. 1971;7(2):179–211.
- 20. Nittrouer S. Children hear the forest. J Acoust Soc Am. 2006;120(4):1799–802. pmid:17069277
- 21. Mattys SL, Jusczyk PW, Luce PA, Morgan JL. Phonotactic and prosodic effects on word segmentation in infants. Cogn Psychol. 1999;38(4):465–94. pmid:10334878
- 22. Vihman MM, Velleman SL. Phonological Reorganization: A Case Study. Lang Speech. 1989;32(2):149–70.
- 23. Menn L. Development of articulatory, phonetic and phonological capabilities. Language production: development, writing and other language processes. New York: Academic Press. 1983. 3–50.
- 24. Charles-Luce J, Luce PA. Similarity neighbourhoods of words in young children’s lexicons. J Child Lang. 1990;17(1):205–15. pmid:2312642
- 25. Ainsworth S, Welbourne S, Hesketh A. Lexical restructuring in preliterate children: evidence from novel measures of phonological representation. Appl Psycholinguist. 2016;37:997–1023.
- 26. Garlock VM, Walley AC, Metsala JL. Age-of-acquisition, word frequency, and neighborhood density effects on spoken word recognition by children and adults. Journal of Memory and Language. 2001;45(3):468–92.
- 27. Metsala JL. An examination of word frequency and neighborhood density in the development of spoken-word recognition. Mem Cognit. 1997;25(1):47–56. pmid:9046869
- 28. Nittrouer S, Studdert-Kennedy M, McGowan RS. The emergence of phonetic segments: evidence from the spectral structure of fricative-vowel syllables spoken by children and adults. J Speech Hear Res. 1989;32(1):120–32. pmid:2704187
- 29. Storkel HL. Restructuring of similarity neighbourhoods in the developing mental lexicon. J Child Lang. 2002;29(2):251–74. pmid:12109371
- 30. Walley AC, Metsala JL, Garlock VM. Spoken vocabulary growth: Its role in the development of phoneme awareness and early reading ability. Reading and Writing. 2003;16(1–2):5–20.
- 31. Anthony JL, Lonigan CJ, Driscoll K, Phillips BM, Burgess SR. Phonological sensitivity: A quasi‐parallel progression of word structure units and cognitive operations. Reading Research Quarterly. 2003;38(4):470–87.
- 32. Fowler AE. How early phonological development might set the stage for phoneme awareness. Phonological processes in literacy. Hillsdale, NJ: Lawrence Erlbaum; 1991. 97–117.
- 33. Liberman IY, Shankweiler D, Fischer FW, Carter B. Explicit syllable and phoneme segmentation in the young child. Journal of Experimental Child Psychology. 1974;18(2):201–12.
- 34. Walley AC. The role of vocabulary development in children's spoken word recognition and segmentation ability. Developmental Review. 1993;13(3):286–350.
- 35. Nittrouer S, Miller ME. Predicting developmental shifts in perceptual weighting schemes. J Acoust Soc Am. 1997;101(4):2253–66. pmid:9104027
- 36. Baumann JF, Graves MF. What is academic vocabulary?. Journal of Adolescent & Adult Literacy. 2010;54(1):4–12.
- 37. Schleppegrell MJ. The language of schooling: a functional linguistics perspective. Mahwah, NJ: Lawrence Erlbaum Associates, Inc. 2004.
- 38. Bernstein B. Class, codes and control, volume I: theoretical studies towards a sociology of language. London: Routledge and Kegan Paul, Ltd. 1971.
- 39. Farah MJ, Shera DM, Savage JH, Betancourt L, Giannetta JM, Brodsky NL, et al. Childhood poverty: specific associations with neurocognitive development. Brain Res. 2006;1110(1):166–74. pmid:16879809
- 40. Oller DK, Eilers RE, Steffens ML, Lynch MP, Urbano R. Speech-like vocalizations in infancy: an evaluation of potential risk factors. J Child Lang. 1994;21(1):33–58. pmid:8006094
- 41. Perkins SC, Finegood ED, Swain JE. Poverty and language development: roles of parenting and stress. Innov Clin Neurosci. 2013;10(4):10–9. pmid:23696954
- 42. Tough J. The development of meaning. New York, NY: Halstead Press. 1977.
- 43. Wild KT, Betancourt LM, Brodsky NL, Hurt H. The effect of socioeconomic status on the language outcome of preterm infants at toddler age. Early Hum Dev. 2013;89(9):743–6. pmid:23803578
- 44. Hoff E. The specificity of environmental influence: socioeconomic status affects early vocabulary development via maternal speech. Child Dev. 2003;74(5):1368–78. pmid:14552403
- 45. Noble KG, McCandliss BD, Farah MJ. Socioeconomic gradients predict individual differences in neurocognitive abilities. Dev Sci. 2007;10(4):464–80. pmid:17552936
- 46. Hart B, Risley TR. Meaningful differences in the everyday experience of young American children. Baltimore, MD: Paul H. Brookes Publishing Co.; 1995.
- 47. Schachter FF. Everyday mother talk to toddlers: early intervention. New York, NY: Academic Press. 1979.
- 48. Tizard B. Language at home and at school. Washington, D.C.: National Association for the Education of Young Children. 1981.
- 49. Wallace IF, Roberts JE, Lodder DE. Interactions of African American infants and their mothers: relations with development at 1 year of age. J Speech Lang Hear Res. 1998;41(4):900–12. pmid:9712136
- 50. Fernald A, Marchman VA, Weisleder A. SES differences in language processing skill and vocabulary are evident at 18 months. Dev Sci. 2013;16(2):234–48. pmid:23432833
- 51. Blair C, Raver CC. Poverty, Stress, and Brain Development: New Directions for Prevention and Intervention. Acad Pediatr. 2016;16(3 Suppl):S30-6. pmid:27044699
- 52. Hair NL, Hanson JL, Wolfe BL, Pollak SD. Association of Child Poverty, Brain Development, and Academic Achievement. JAMA Pediatr. 2015;169(9):822–9. pmid:26192216
- 53. Noble KG, Norman MF, Farah MJ. Neurocognitive correlates of socioeconomic status in kindergarten children. Dev Sci. 2005;8(1):74–87. pmid:15647068
- 54. Raizada RD, Richards TL, Meltzoff A, Kuhl PK. Socioeconomic status predicts hemispheric specialisation of the left inferior frontal gyrus in young children. Neuroimage. 2008;40(3):1392–401.
- 55. Barre N, Morgan A, Doyle LW, Anderson PJ. Language abilities in children who were very preterm and/or very low birth weight: a meta-analysis. J Pediatr. 2011;158(5):766-774.e1. pmid:21146182
- 56. Vandormael C, Schoenhals L, Hüppi PS, Filippa M, Borradori Tolsa C. Language in Preterm Born Children: Atypical Development and Effects of Early Interventions on Neuroplasticity. Neural Plast. 2019;2019:6873270. pmid:30930944
- 57. Vohr B. Speech and language outcomes of very preterm infants. Semin Fetal Neonatal Med. 2014;19(2):78–83. pmid:24275068
- 58. Duncan N, Schneider P, Robertson CMT. Language abilities in five- through seven-year-old children born at or under 28 weeks gestational age. Journal of Medical Speech-Language Pathology. 1996;4:71–9.
- 59. van Noort-van der Spek IL, Franken M-CJP, Weisglas-Kuperus N. Language functions in preterm-born children: a systematic review and meta-analysis. Pediatrics. 2012;129(4):745–54. pmid:22430458
- 60. Savenko IV, Garbaruk ES, Krasovskaya EA. Changes in auditory function in premature children: A prospective cohort study. Int J Pediatr Otorhinolaryngol. 2020;139:110456. pmid:33096380
- 61. Durante AS, Mariano S, Pachi PR. Auditory processing abilities in prematurely born children. Early Hum Dev. 2018;120:26–30. pmid:29605746
- 62. Amin SB, Orlando M, Monczynski C, Tillery K. Central auditory processing disorder profile in premature and term infants. American Journal of Perinatology. 2015;32(4):399–404.
- 63. Blackburn S. Environmental impact of the NICU on developmental outcomes. J Pediatr Nurs. 1998;13(5):279–89. pmid:9798363
- 64. Monson BB, Rock J, Cull M, Soloveychik V. Neonatal intensive care unit incubators reduce language and noise levels more than the womb. J Perinatol. 2020;40(4):600–6. pmid:32020037
- 65. Van Der Merwe J, Van D V, Ferraris S, Gsell W, Himmelreich U, Toelen J. Early neuropathological and neurobehavioral consequences of preterm birth in a rabbit model. Sci Rep. 2019;9(1):3506.
- 66. McGregor KK. How We Fail Children With Developmental Language Disorder. Lang Speech Hear Serv Sch. 2020;51(4):981–92. pmid:32755505
- 67. Roid GH, Miller LJ. Leiter International Performance Scale – Revised (Leiter-R). Wood Dale, IL: Stoelting Co. 2002.
- 68. Nittrouer S, Burton LT. The role of early language experience in the development of speech perception and phonological processing abilities: evidence from 5-year-olds with histories of otitis media with effusion and low socioeconomic status. J Commun Disord. 2005;38(1):29–63. pmid:15475013
- 69. Hollingshead AB. The two factor index of social position. New Haven, CT: Hollingshead. 1957.
- 70. Byrne D, Dillon H, Tran K, Arlinger S, Wilbraham K, Cox R, et al. An international comparison of long-term average speech spectra. The Journal of the Acoustical Society of America. 1994;96(4):2108–20.
- 71. Martin N, Brownell R. Expressive One-Word Picture Vocabulary Test (EOWPVT-4). Novato, CA: Academic Therapy Publications, Inc. 2011.
- 72. Zimmerman IL, Steiner VG, Pond RE. Preschool Language Scale (PLS-5). 5th ed. San Antonio, TX: Pearson. 2011.
- 73. Stanovich KE, Cunningham AE, Cramer BB. Assessing phonological awareness in kindergarten children: Issues of task comparability. Journal of Experimental Child Psychology. 1984;38(2):175–90.
- 74. Treiman R, Zukowski A. Children’s sensitivity to syllables, onsets, rimes, and phonemes. J Exp Child Psychol. 1996;62(3):193–215. pmid:8691117
- 75. Bowey JA. Socioeconomic status differences in preschool phonological sensitivity and first-grade reading achievement. Journal of Educational Psychology. 1995;87(3):476–87.
- 76. Farmer ME, Klein RM. The evidence for a temporal processing deficit linked to dyslexia: A review. Psychon Bull Rev. 1995;2(4):460–93. pmid:24203785
- 77. Goswami U, Thomson J, Richardson U, Stainthorp R, Hughes D, Rosen S, et al. Amplitude envelope onsets and developmental dyslexia: A new hypothesis. Proc Natl Acad Sci U S A. 2002;99(16):10911–6. pmid:12142463
- 78. Lorenzi C, Dumont A, Füllgrabe C. Use of temporal envelope cues by children with developmental dyslexia. J Speech Lang Hear Res. 2000;43(6):1367–79. pmid:11193958
- 79. Merzenich MM, Jenkins WM, Johnston P, Schreiner C, Miller SL, Tallal P. Temporal processing deficits of language-learning impaired children ameliorated by training. Science. 1996;271(5245):77–81.
- 80. Tomblin JB, Barker BA, Hubbs S. Developmental constraints on language development in children with cochlear implants. Int J Audiol. 2007;46(9):512–23. pmid:17828667
- 81. Newport EL. Maturational Constraints on Language Learning. Cognitive Science. 1990;14(1):11–28.
- 82. Newport EL, Bavelier D, Neville HJ. Critical Thinking about Critical Periods: Perspectives on a Critical Period for Language Acquisition. Language, Brain, and Cognitive Development. The MIT Press. 2002. 481–502.
- 83. Nittrouer S. Integrated language intervention for children with hearing loss. Pediatric cochlear implantation: learning and the brain. Cham: Springer Nature Switzerland. 2024. 411–24.
- 84. Kuhl PK. Brain mechanisms in early language acquisition. Neuron. 2010;67(5):713–27.
- 85. Werker JF, Tees RC. Speech perception as a window for understanding plasticity and commitment in language systems of the brain. Dev Psychobiol. 2005;46(3):233–51. pmid:15772961
- 86. Pan L, Fang X, Chen H, Zhai F, Chen J. Risk factors for hearing loss in neonates admitted to neonatal intensive care units. J Laryngol Otol. 2022;136(12):1196–202. pmid:36210183
- 87. Hoff E, Tian C. Socioeconomic status and cultural influences on language. J Commun Disord. 2005;38(4):271–8. pmid:15862810
- 88. Justice LM, Singletary B, Jiang H, Schmeer KK. Profiles of Family Stressors Among Low-Income Families with Young Children. Matern Child Health J. 2025;29(4):483–93. pmid:40000565
- 89. Raizada RDS, Kishiyama MM. Effects of socioeconomic status on brain development, and how cognitive neuroscience may contribute to levelling the playing field. Front Hum Neurosci. 2010;4:3. pmid:20161995