A new brain-inspired algorithm could help hearing aids tune out interference and isolate single talkers in a crowd of voices.
When a group of friends gets together at a bar or gathers for an intimate dinner, conversations can quickly multiply and mix, with different groups and pairings chatting over and across one another.
Navigating this lively jumble of words—and focusing on the ones that matter—is particularly difficult for people with some form of hearing loss. Bustling conversations can become a fused mess of chatter, even for someone with hearing aids, which often struggle to filter out background noise.
It’s known as the “cocktail party problem”—and Boston University researchers believe they might have a solution.
In testing, researchers found the new algorithm could improve word recognition accuracy by 40 percentage points relative to current hearing aid algorithms.
“We were extremely surprised and excited by the magnitude of the improvement in performance—it’s pretty rare to find such big improvements,” says Kamal Sen, the algorithm’s developer and a BU College of Engineering associate professor of biomedical engineering.
Some estimates put the number of Americans with hearing loss at close to 50 million; by 2050, around 2.5 billion people globally are expected to have some form of hearing loss, according to the World Health Organization.
“The primary complaint of people with hearing loss is that they have trouble communicating in noisy environments,” says coauthor Virginia Best, a BU Sargent College of Health & Rehabilitation Sciences research associate professor of speech, language, and hearing sciences.
“These environments are very common in daily life and they tend to be really important to people—think about dinner table conversations, social gatherings, workplace meetings. So, solutions that can enhance communication in noisy places have the potential for a huge impact.”
As part of the work, the researchers also tested the ability of current hearing aid algorithms to cope with the cacophony of cocktail parties. Many hearing aids already include noise reduction algorithms and directional microphones, or beamformers, designed to emphasize sounds coming from the front.
“We decided to benchmark against the industry standard algorithm that’s currently in hearing aids,” says Sen. That existing algorithm “doesn’t improve performance at all; if anything, it makes it slightly worse. Now we have data showing what’s been known anecdotally from people with hearing aids.”
Sen has patented the new algorithm—known as BOSSA, which stands for biologically oriented sound segregation algorithm—and is hoping to connect with companies interested in licensing the technology. He says that with Apple jumping into the hearing aid market—its AirPods Pro 2 earbuds are advertised as having a clinical-grade hearing aid function—the BU team’s breakthrough is timely: “If hearing aid companies don’t start innovating fast, they’re going to get wiped out, because Apple and other start-ups are entering the market.”
For the past 20 years, Sen has been studying how the brain encodes and decodes sounds, looking for the circuits involved in managing the cocktail party effect. With researchers in his Natural Sounds & Neural Coding Laboratory, he’s plotted how sound waves are processed at different stages of the auditory pathway, tracking their journey from the ear to translation by the brain. One key mechanism: inhibitory neurons, brain cells that help suppress certain unwanted sounds.
“You can think of it as a form of internal noise cancellation,” he says. “If there’s a sound at a particular location, these inhibitory neurons get activated.” According to Sen, different neurons are tuned to different locations and frequencies.
The brain’s approach is the inspiration for the new algorithm, which uses spatial cues—like a sound’s volume and timing at each ear—to tune it in or out, sharpening or muffling a speaker’s words as needed.
“It’s basically a computational model that mimics what the brain does,” says Sen, who’s affiliated with BU’s centers for neurophotonics and for systems neuroscience, “and actually segregates sound sources based on sound input.”
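To make the idea of spatial segregation concrete, here is a deliberately simplified toy sketch—not the BOSSA algorithm itself, whose details live in the published paper—showing how level differences between a left and right ear signal can be used to keep frequency components arriving from a target direction and suppress the rest. The source placements, the 3 dB threshold, and the use of pure tones are all illustrative assumptions.

```python
import numpy as np

fs = 16000                              # sample rate in Hz
t = np.arange(fs) / fs                  # one second of time samples

target = np.sin(2 * np.pi * 440 * t)    # target talker, placed to the left
masker = np.sin(2 * np.pi * 1000 * t)   # competing talker, placed to the right

# Simulate spatial placement with interaural level differences (ILDs):
# the left ear hears the target louder, the right ear hears the masker louder.
left = 1.0 * target + 0.3 * masker
right = 0.3 * target + 1.0 * masker

L = np.fft.rfft(left)
R = np.fft.rfft(right)

# Per-frequency spatial mask: keep bins whose left-vs-right level difference
# points toward the target's (left) location, zero out the others.
ild_db = 20 * np.log10(np.abs(L) + 1e-12) - 20 * np.log10(np.abs(R) + 1e-12)
mask = (ild_db > 3.0).astype(float)     # 3 dB threshold, an arbitrary choice

# Reconstruct a signal dominated by the target talker.
recovered = np.fft.irfft(mask * L, n=len(left))
```

In this toy setup the 440 Hz component survives (it is roughly 10 dB louder on the left) while the 1000 Hz masker is zeroed out. The real system works on running speech with many time-frequency bins and also exploits timing cues, but the core move—attributing each piece of the spectrum to a location and gating it accordingly—is the same.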
“Ultimately, the only way to know if a benefit will translate to the listener is via behavioral studies,” says Best, an expert on spatial perception and hearing loss, “and that requires scientists and clinicians who understand the target population.”
Formerly a research scientist at Australia’s National Acoustic Laboratories, Best helped design a study using a group of young adults with sensorineural hearing loss, typically caused by genetic factors or childhood diseases. In a lab, participants wore headphones that simulated people talking from different nearby locations. Their ability to pick out select speakers was tested with the aid of the new algorithm, the current standard algorithm, and no algorithm. Coauthor Boyd helped collect much of the data and was the paper’s lead author.
Reporting their findings, the researchers write that the “biologically inspired algorithm led to robust intelligibility gains under conditions in which a standard beamforming approach failed. The results provide compelling support for the potential benefits of biologically inspired algorithms for assisting individuals with hearing loss in ‘cocktail party’ situations.”
They’re now in the early stages of testing an upgraded version that incorporates eye tracking technology to allow users to better direct their listening attention.
The science powering the algorithm might have implications beyond hearing loss too.
“The [neural] circuits we are studying are much more general purpose and much more fundamental,” says Sen.
“It ultimately has to do with attention, where you want to focus—that’s what the circuit was really built for. In the long term, we’re hoping to take this to other populations, like people with ADHD or autism, who also really struggle when there’s multiple things happening.”
The findings appear in Communications Engineering.
Support for this research came from the National Institutes of Health, the National Science Foundation, and the Demant Foundation.
Source: Boston University