Even when they are not actively listening, the sounds babies hear play a significant role in language development, especially for babies at risk of language delays, according to a new study.
Although it is well known that music and speech boost babies' ability to learn, there is also robust evidence that the developing brain analyzes certain brief auditory cues in an infant's environment and uses them to guide the formation of networks involved in language processing.
Researcher April Benasich, an expert in early brain plasticity who studies infant language and cognitive development, demonstrated that infants passively exposed to a series of brief non-speech sounds once a week for six weeks identified and discriminated syllables more accurately, and had better language scores at 12 and 18 months, than infants who had not received that exposure.
The study, published in the journal Cerebral Cortex, is important because it is the first to show that passive exposure to non-speech sounds facilitates the formation and strengthening of neuronal connections essential to language processing. These sounds contain tiny acoustic transitions, tens of milliseconds long, similar to the cues that allow babies to detect that language is present.
Previous research in Benasich’s lab showed that interactive exposure to certain auditory cues had a significant impact on critical brain networks and improved both attention and infant language outcomes over time.
But the jury was still out on whether passively exposing infants to these same types of sounds would have any effect on language networks. The new study shows that it does: passive exposure produced impressive impacts on both language processing and later language outcomes.
The results suggest that supporting rapid auditory processing abilities early in development, even through passive exposure alone, can positively influence later language development.
“The ability to impact developing language networks passively is a very important step forward. The passive route provides a simpler, cheaper alternative to promote optimal networks, allowing parents the opportunity to support typical development at home as well as offering a path to an accessible intervention in the clinic or pediatrician’s office for infants at risk for developmental language disorders,” says Benasich, a professor of neuroscience at Rutgers-Newark’s Center for Molecular and Behavioral Neuroscience and the nation’s first endowed chair in Developmental Cognitive Neuroscience.
Her previous research found that measures of rapid auditory processing ability can be used to identify the infants at highest risk of language delay and impairment, providing an opportunity to intervene early and mitigate poor outcomes.
“Babies need the small sound transitions that brains must analyze to develop language,” she says. “Their brains are hard-wired to analyze any pertinent environmental sounds coming in. If those sounds are all the same frequency, all at the same intensity, the brain might stop listening for these important variations, which could impede the creation of language networks.”
Source: Rutgers University