Timing Is Key for Healthy Hearing | MIT News

When sound waves reach your inner ear, nerve cells there pick up the vibrations and alert your brain. The signal encodes a treasure trove of information that allows you to follow a conversation, recognize familiar voices, enjoy music, and quickly find a ringing phone or crying baby.

Nerve cells send signals by firing electrical impulses (brief changes in voltage that travel along nerve fibers, also called action potentials). Amazingly, auditory neurons can fire hundreds of electrical impulses every second, and they can time those impulses with great precision to match the vibrations of incoming sound waves.

Using a powerful new model of human hearing, scientists at MIT’s McGovern Institute for Brain Research have found that this precise timing is crucial for some of the most important ways we make sense of auditory information, including speech recognition and sound source localization.

The open-access findings, published December 4 in Nature Communications, show how machine learning can help neuroscientists understand how the brain uses auditory information in the real world. MIT Professor and McGovern Fellow Josh McDermott, who led the study, explains that his team’s model can help researchers study the effects of different types of hearing loss and come up with more effective interventions.

Sound science

Researchers have long suspected that timing is important to sound perception, because auditory signals from the nervous system are so precisely timed. Sound waves vibrate at a rate that determines the pitch of a sound: low-pitched sounds come from slowly vibrating waves, while high-pitched sounds vibrate more frequently. The auditory nerve, which carries information from the sound-sensing hair cells in the ear to the brain, generates electrical impulses that correspond to the frequency of these vibrations. “Action potentials in the auditory nerve fire at very specific times relative to the peaks of the stimulus waveform,” explains McDermott, who is also associate dean of MIT’s Department of Brain and Cognitive Sciences.

This relationship is called phase-locking, and it requires neurons to time their impulses with sub-millisecond precision. But scientists still don’t quite understand what information these timing patterns convey to the brain. McDermott says the question is not only scientifically intriguing, but also has important clinical implications. “If we want to design prosthetic devices that send electrical signals to the brain to replicate the function of the ear, it’s really important to know what information is actually important for a normal ear,” he says.
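Phase-locking can be illustrated with a toy simulation. This is a hypothetical sketch, not the researchers’ model: it assumes a neuron that tends to fire near the peak of each cycle of a pure tone, and measures alignment with the standard vector-strength statistic (1.0 means spikes land at exactly one stimulus phase, 0.0 means no alignment).

```python
import numpy as np

def phase_locked_spikes(freq_hz, duration_s, p_fire=0.9, jitter_s=0.0002, seed=0):
    """Toy phase-locking: the neuron tends to fire near the peak of each
    cycle of a pure tone, with sub-millisecond timing jitter."""
    rng = np.random.default_rng(seed)
    n_cycles = int(freq_hz * duration_s)
    peak_times = (np.arange(n_cycles) + 0.25) / freq_hz  # a sine peaks a quarter cycle in
    fired = rng.random(n_cycles) < p_fire                # not every cycle yields a spike
    jitter = rng.normal(0.0, jitter_s, n_cycles)         # sub-millisecond timing noise
    return np.sort(peak_times[fired] + jitter[fired])

def vector_strength(spike_times, freq_hz):
    """1.0 = spikes perfectly aligned to one stimulus phase; 0.0 = no alignment."""
    phases = 2 * np.pi * freq_hz * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

spikes = phase_locked_spikes(freq_hz=200, duration_s=1.0)
print(round(vector_strength(spikes, 200), 3))  # close to 1: tight phase-locking
```

With 0.2 ms of jitter on a 200 Hz tone, the spikes stay tightly clustered at one phase of the waveform, which is the sub-millisecond precision the article describes.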

This is difficult to study experimentally—animal models don’t offer much insight into how the human brain extracts structure from language or music, and the human auditory nerve isn’t accessible for direct study—so McDermott and former graduate student Mark Saddler PhD ’24 turned to artificial neural networks.

Artificial hearing

Neuroscientists have long used computational models to explore how sensory information is decoded by the brain, but until recent advances in computing power and machine learning methods, these models were limited to simulating simple tasks. “One of the problems with previous models was that they were often too good,” says Saddler, now at the Technical University of Denmark. For example, a computational model given the task of identifying the higher tone in a simple pair of sounds could outperform a human asked to do the same thing. “This is not the kind of task we do when listening every day,” Saddler points out. “The brain is not optimized to solve this very artificial challenge.” This mismatch limited the insights gained from previous generations of models.

To better understand the brain, Saddler and McDermott wanted their model of hearing to perform the tasks people use their ears for in the real world, such as recognizing words and voices. That meant developing an artificial neural network that simulates the part of the brain that receives input from the ears. The network was fed input from about 32,000 simulated sound-detecting sensory neurons and optimized for a variety of real-world tasks.

The researchers demonstrated that their model replicates human hearing better than any previous model of auditory behavior, McDermott says. In one test, they asked the neural network to recognize words and voices among dozens of different background noises, from airplane cabin rumble to enthusiastic applause. In every situation, the model performed very similarly to a human.

But when the team degraded the precision of spike timing in the simulated ear, their model could no longer match human speech recognition and sound localization abilities. For example, McDermott’s team had previously shown that humans use pitch to identify voices, but the model revealed that this ability is lost without precise timing cues. “To perform the task well, in a way that accounts for human behavior, you need pretty precise spike timing,” Saddler says. This suggests that the brain uses precisely timed auditory signals to support these practical aspects of hearing.
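The effect of degrading timing can be sketched with a toy vector-strength statistic. This is an illustrative assumption, not the paper’s actual manipulation: adding a few milliseconds of jitter to perfectly phase-locked spike times erases the alignment between spikes and the stimulus waveform, while sub-millisecond jitter preserves it.

```python
import numpy as np

def vector_strength(spike_times, freq_hz):
    """1.0 = spikes perfectly aligned to one stimulus phase; 0.0 = no alignment."""
    phases = 2 * np.pi * freq_hz * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(1)
freq_hz = 500.0
# perfectly phase-locked spikes: one per cycle, at the waveform peak
spikes = (np.arange(500) + 0.25) / freq_hz

for jitter_s in (0.0, 0.0001, 0.002):  # 0, 0.1 ms, and 2 ms of timing noise
    jittered = spikes + rng.normal(0.0, jitter_s, spikes.size)
    print(f"{jitter_s * 1000:.1f} ms jitter -> vector strength "
          f"{vector_strength(jittered, freq_hz):.3f}")
```

At 500 Hz, 0.1 ms of jitter barely disturbs the phase alignment, but 2 ms of jitter spans a full stimulus cycle and destroys it, which is the kind of degradation that broke the model’s speech recognition and localization.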

The team’s findings show how artificial neural networks can help neuroscientists understand how information extracted from the ear influences our perception of the world, both in people with normal hearing and those with hearing loss. “The ability to link patterns of auditory nerve activation to behavior opens up a lot of possibilities,” McDermott says.

“Now that we have a model that connects neural responses in the ear to auditory behavior, it raises the question, ‘How would simulating different types of hearing loss affect hearing?’” McDermott says. “We think this will improve the accuracy of hearing loss diagnosis, as well as help improve hearing aids and cochlear implants.” For example, he says, “Cochlear implants are limited in many ways. There are some things they can do, and some things they can’t. What’s the best way to configure the implant to support these behaviors? In principle, the model can tell us that.”
