New cognitive hearing aid filters out noise

For those with hearing impairments, trying to follow a conversation in a noisy environment, such as a party, a workplace, or a restaurant, can be very difficult. A conventional hearing aid amplifies all sound indiscriminately, so the brain of the person wearing it must struggle to filter out what is unwanted. With a new method, researchers have succeeded in bringing a cognitive hearing aid that can filter out unwanted noise much closer to being commercially available.

Some hearing aids can ably suppress background noise, but these devices are unable to help the wearer focus on a single conversation. This is set to change with the development of a cognitive hearing aid, from Columbia Engineering in New York, that continually monitors the brain activity of the wearer. Artificial intelligence then determines whether the wearer is attending to a specific speaker in the environment.

The basis of the device is an end-to-end system that takes a single audio channel containing a mixture of speakers, together with the listener’s neural signals. The device automatically separates the individual speakers and determines which one the wearer is listening to; the attended target can be decoded from non-invasive recordings of the neural responses in the listener’s brain. The device then amplifies the attended speaker’s voice to assist the listener, all in under ten seconds. The speaker whose voice shows the maximum similarity with the neural data is identified by the system as the target and is subsequently amplified.
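To make that selection step concrete, below is a minimal Python sketch of the “maximum similarity” logic. It assumes the speech separation has already been performed (the separated waveforms and the neural recordings are simulated here), and it stands in a simple channel-averaging weight vector for what would, in a real system, be a stimulus-reconstruction decoder trained on the listener’s own data. All names, sample rates, and the gain value are illustrative assumptions, not details of the Columbia system.

```python
import numpy as np

FS = 8000        # audio sample rate in Hz (illustrative)
EEG_RATE = 64    # neural-signal sample rate in Hz (illustrative)

def envelope(audio, frame=FS // EEG_RATE):
    """Amplitude envelope of a waveform, downsampled to the neural rate."""
    n = len(audio) // frame * frame
    return np.abs(audio[:n]).reshape(-1, frame).mean(axis=1)

def select_and_amplify(sources, eeg, decoder, gain=4.0):
    """Pick the separated source whose envelope best matches the envelope
    decoded from the neural data, then boost that source in the remix."""
    target = eeg @ decoder  # linear stimulus reconstruction (stand-in)
    scores = [np.corrcoef(envelope(s), target)[0, 1] for s in sources]
    attended = int(np.argmax(scores))
    mix = sum(gain * s if i == attended else s for i, s in enumerate(sources))
    return attended, scores, mix

# --- Simulated demo data standing in for the separator's output ---
rng = np.random.default_rng(0)
t = np.arange(10 * FS) / FS                        # ten seconds of audio
sources = [rng.standard_normal(len(t)) * (1 + np.sin(2 * np.pi * f * t))
           for f in (0.5, 1.3)]                    # two "speakers"
decoder = np.ones(16) / 16                         # stand-in trained decoder
# Simulated 16-channel EEG whose decoded envelope tracks speaker 0:
eeg = envelope(sources[0])[:, None] \
      + 0.5 * rng.standard_normal((len(t) // (FS // EEG_RATE), 16))

attended, scores, mix = select_and_amplify(sources, eeg, decoder)
print(f"attended speaker: {attended}, correlations: {np.round(scores, 2)}")
```

In the actual device, the separation itself is handled by deep neural network models, as the researchers note below, and the decoder would be trained on the individual listener’s neural recordings rather than being a fixed average.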

This remarkable device was developed using deep neural network models to create a sophisticated method of auditory attention decoding. At present the development is a proof-of-concept study; the long-term aim is to produce a fully working, cognitively controlled hearing aid. According to lead researcher Professor Nima Mesgarani, in a communication sent to Digital Journal: “This work combines the state-of-the-art from two disciplines: speech engineering and auditory attention decoding. We were able to develop this system once we made the breakthrough in using deep neural network models to separate speech.”

The research has been published in the Journal of Neural Engineering, in a paper titled “Neural decoding of attentional selection in multi-speaker environments without access to clean sources.”

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics and current affairs.
