For people with even mild hearing loss, one of the toughest and most frustrating obstacles is making out the person you are talking to over heavy background noise. Whether you are out to eat at a crowded restaurant, taking in a ballgame at the stadium or trying to discuss work with a colleague on the train, there is sometimes no escaping the ambient din that makes speech hard to follow. According to new audiological research, however, this infuriating problem may soon become a thing of the past.
Researchers at Ohio State University have created computer algorithms that could one day be built into hearing aids, analyzing incoming sound to separate speech from background noise. For more than 50 years, hearing technology has struggled with an inability to fully suppress the sounds a user is not focusing on. With an estimated 700 million people worldwide affected by hearing loss, the researchers' findings have hearing aid users around the globe listening.
For their initial tests, the researchers recruited hearing-impaired volunteers to see whether the algorithms had any effect on eliminating background noise. The participants first removed their hearing aids, then listened over headphones to recordings of speech shrouded in background noise. They were asked to repeat what they heard, then performed the same test again, this time with recordings processed by the algorithm to reduce the excess noise. The background sound was comparable to the hum of a loud air conditioner while sentences and phrases played through the headphones.
The researchers found that with the algorithm applied, the volunteers' comprehension of the words improved by anywhere from 25 to 85 percent on average. Furthermore, students with normal hearing were also tested, and they scored lower on comprehension without the algorithm than the hearing-impaired volunteers did with the processing.
The algorithm relies on a technique known as "machine learning": the researchers trained a special type of deep neural network to separate speech from competing noise. The team now hopes to improve the algorithm's ability to process speech in real time, and eventually to put the technology in smartphones, which could run the algorithm and transmit the cleaned-up sound wirelessly and instantly.
DeLiang "Leon" Wang, a professor at Ohio State University and a lead contributor to the study, developed the algorithm in pursuit of a different strategy for hearing technology, and in hopes of helping improve the condition of someone very close to him.
“For 50 years, researchers have tried to pull out the speech from the background noise. That hasn’t worked, so we decided to try a very different approach: classify the noisy speech and retain only the parts where speech dominates the noise,” Wang said in a statement. “(My mom) has been one of my primary motivations. She’s tried all sorts of hearing aids, and none of them works for this problem.”
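Wang's classify-and-retain idea is often described in the speech-separation literature as time-frequency masking. The sketch below is a minimal, illustrative version using a so-called ideal binary mask over toy spectrogram values: each time-frequency cell is kept only if speech energy dominates noise energy there. The array shapes and additive mixture are simplifying assumptions for illustration; in a real system like the one described, a deep neural network would have to estimate the mask from the noisy mixture alone, since clean speech and noise are not separately available.

```python
import numpy as np

def ideal_binary_mask(speech_mag, noise_mag):
    """Mark each time-frequency cell 1 if speech energy dominates noise, else 0."""
    return (speech_mag > noise_mag).astype(float)

# Toy spectrogram magnitudes (frequency bins x time frames); illustrative only.
rng = np.random.default_rng(0)
speech = np.abs(rng.normal(size=(4, 5)))
noise = np.abs(rng.normal(size=(4, 5)))
mixture = speech + noise  # simplified additive mixture of magnitudes

mask = ideal_binary_mask(speech, noise)
enhanced = mask * mixture  # silence the cells dominated by noise
```

In practice the "retain only the parts where speech dominates" step is exactly the masking multiplication at the end; the hard part, which the neural network handles, is predicting `mask` without access to `speech` and `noise` separately.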
Looking toward the future
Because the researchers have taken a new approach to this auditory problem, the technology is currently patent pending and is being offered for commercial licensing through Ohio State's Technology Commercialization and Knowledge Transfer Office. In addition, a $1.8 million grant from the National Institutes of Health will allow the researchers to continue their work and expand algorithm testing with human volunteers. Only time will tell whether they will succeed in revolutionizing the auditory world.