How do babies learn the sounds of their native language?

Babies can distinguish most speech sounds soon after birth, and by age 1 they have become language-specific listeners. But researchers are still trying to understand how babies learn which phonetic dimensions of their language are contrastive, a term in linguistics for differences between speech sounds that can change the meaning of a word. For example, in English, [b] and [d] are contrastive, because changing the [b] in "ball" to a [d] makes it a different word, "doll."

A recent paper in the Proceedings of the National Academy of Sciences (PNAS) by two computational linguists affiliated with the University of Maryland offers new insight into this question, which is essential to better understanding how children learn the sounds of their native language.

Their research shows that an infant's ability to interpret differences between sounds as either contrastive or non-contrastive may come from the contexts in which those sounds occur.

For a long time, researchers believed that there would be clear differences in how contrastive sounds, such as short and long vowels in Japanese, are pronounced. However, although the pronunciations of these two sounds differ in careful speech, they are often much more ambiguous in naturalistic speech.

"This is one of the first accounts of phonetic learning shown to work on naturalistic data, suggesting that children can learn contrastive dimensions after all," says Kasia Hitczenko, lead author of the paper.

Hitczenko graduated from the University of Maryland in 2019 with a Ph.D. in linguistics. She is currently a postdoctoral researcher in the Laboratory of Cognitive Sciences and Psycholinguistics at the École Normale Supérieure in Paris.

Hitczenko's work shows that children can distinguish contrastive from non-contrastive sounds using contextual cues, such as neighboring sounds. Her team tested this theory in two case studies, each with a different definition of context, by comparing data from Japanese, Dutch and French.

The researchers grouped speech by the context in which it occurred and produced plots summarizing the vowel durations in each context. In Japanese, they found that these vowel-duration plots looked distinctly different across contexts, because some contexts contain more short vowels while others contain more long vowels. In French, where vowel duration is not contrastive, the plots were similar across all contexts.
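The idea behind this comparison can be illustrated with a small simulation. The sketch below is purely hypothetical and is not the authors' actual pipeline or data; the functions `simulate` and `context_gap`, and all the duration parameters, are invented for illustration. It generates vowel durations for a Japanese-like language, where contexts differ in how many long vowels they contain, and a French-like language, where they do not, then compares the per-context duration distributions with a simple summary statistic.

```python
import random
import statistics

random.seed(0)

def simulate(contrastive, n=500):
    """Simulate vowel durations (in ms) observed in two contexts, A and B.

    In the 'contrastive' (Japanese-like) language, context A favors short
    vowels and context B favors long vowels. In the non-contrastive
    (French-like) language, both contexts draw from the same mixture.
    All numbers are made up for illustration.
    """
    data = {"A": [], "B": []}
    for ctx in data:
        if contrastive:
            long_rate = 0.15 if ctx == "A" else 0.85
        else:
            long_rate = 0.5
        for _ in range(n):
            if random.random() < long_rate:
                data[ctx].append(random.gauss(180, 25))  # long vowel
            else:
                data[ctx].append(random.gauss(90, 25))   # short vowel
    return data

def context_gap(data):
    """Difference in mean duration between the two contexts.

    A large gap means the duration distributions look different across
    contexts, which (on this account) signals a contrastive dimension.
    """
    return abs(statistics.mean(data["A"]) - statistics.mean(data["B"]))

japanese_like = simulate(contrastive=True)
french_like = simulate(contrastive=False)

print(f"Japanese-like gap: {context_gap(japanese_like):.1f} ms")
print(f"French-like gap:   {context_gap(french_like):.1f} ms")
```

On this toy setup the Japanese-like language shows a large gap between contexts while the French-like gap is near zero, mirroring the qualitative pattern the article describes.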

"We believe this work provides a compelling account of how children learn the sound contrasts of their language, and shows that the necessary signal is present in naturalistic speech, improving our understanding of early language learning," says co-author Naomi Feldman, an associate professor of linguistics with an appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS).

Feldman adds that the cue they studied is available in most languages, so it is possible that their result generalizes to other contrasts.

The recently published research is an extension of Hitczenko's Ph.D. thesis, which examined how context can be used in phonetic learning and in the perception of natural speech.

Feldman was Hitczenko's academic adviser at Maryland, where they both completed much of this research in the Computational Linguistics and Information Processing Lab, which is supported by UMIACS.


The paper, "Natural speech supports distributional learning across contexts," was published on September 13, 2022, in the Proceedings of the National Academy of Sciences.

This work was supported in part by the National Science Foundation, including a $520,000 award for "Modeling the evolution of vocal representations" and a $240,000 award for "Cognitive models for the acquisition of vowels in context."

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.