For people with hearing impairments, following a conversation in a noisy environment, such as a party, a workplace, or a restaurant, can be very difficult. A conventional hearing device simply amplifies sound, leaving the wearer's brain to struggle to filter out what is unwanted. With a new method, researchers have brought a cognitive hearing aid that can filter out unwanted noise much closer to commercial availability.
Some hearing aids can ably suppress background noise, but they cannot help the wearer focus on a single conversation. That is set to change with a cognitive hearing aid, developed at Columbia Engineering in New York, that continually monitors the wearer's brain activity. Its artificial intelligence determines whether the wearer is conversing with a specific speaker in the environment.
The basis of the device is an end-to-end system that takes a single audio channel containing a mixture of speakers, along with the listener's neural signals. The system automatically separates the individual speakers and determines which one the wearer is attending to: the attended target can be decoded from non-invasive neural recordings of the listener's brain. The device then amplifies that speaker's voice to assist the listener, all in less than ten seconds. The speaker whose separated signal shows the maximum similarity with the neural data is identified by the artificial intelligence as the target and is subsequently amplified.
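The selection step described above can be sketched in a few lines. The toy code below is an assumption-laden illustration, not the researchers' actual algorithm: it stands in for the neural decoding with a noisy copy of one speaker's envelope, scores each separated track by its correlation with that estimate, and boosts the winner in the mixture. All function names and the gain value are hypothetical.

```python
import numpy as np

def select_attended_speaker(separated, neural_estimate):
    """Score each separated track by correlating its amplitude envelope
    with the envelope estimated from the listener's neural signals,
    then return the index of the best match."""
    scores = [np.corrcoef(np.abs(s), neural_estimate)[0, 1] for s in separated]
    return int(np.argmax(scores)), scores

def amplify_target(mixture, separated, target_idx, gain=4.0):
    """Boost the attended speaker's separated track within the mixture."""
    return mixture + (gain - 1.0) * separated[target_idx]

# Toy demo: two noise-like speakers with different amplitude envelopes.
rng = np.random.default_rng(0)
t = np.linspace(0, 3, 8000)
s0 = rng.normal(size=8000) * np.sin(t) ** 2
s1 = rng.normal(size=8000) * np.cos(t) ** 2
mixture = s0 + s1

# Stand-in for an envelope decoded from neural recordings: the listener
# is attending to speaker 1, so use a noisy copy of |s1|.
neural = np.abs(s1) + 0.5 * rng.normal(size=8000)

idx, scores = select_attended_speaker([s0, s1], neural)
enhanced = amplify_target(mixture, [s0, s1], idx)
print("attended speaker:", idx)
```

In the real system the separation is done by deep neural networks and the neural estimate comes from the listener's recorded brain responses; the correlate-and-pick-the-maximum logic, however, mirrors the similarity-matching step the article describes.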
The device was developed using deep neural network models, yielding a sophisticated method of auditory attention decoding. At present the work is a proof-of-concept study; the long-term aim is a fully working, cognitively controlled hearing aid. According to lead researcher Professor Nima Mesgarani, in a communication sent to Digital Journal: “This work combines the state-of-the-art from two disciplines: speech engineering and auditory attention decoding. We were able to develop this system once we made the breakthrough in using deep neural network models to separate speech.”
The research has been published in the Journal of Neural Engineering, in a paper titled “Neural decoding of attentional selection in multi-speaker environments without access to clean sources.”