Noise-Selecting AI Headphones Let You Choose Who, What to Hear

The potential future of noise-canceling headphones.

ChatGPT artificial intelligence may help drive the future of noise-canceling headphones. Interesting Engineering reports that researchers at the University of Washington have developed a new deep learning algorithm that lets users select, in real time, who and what to hear, whether a crying infant or an ambulance siren.

The team reportedly calls the technology "semantic hearing." The headphones remove all background noise by streaming recorded audio to a linked smartphone; using voice commands or a smartphone app, users then choose which of 20 sound classes the headphones should play, including sirens, infant cries, speech, vacuum cleaners, and bird chirps.
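The article does not describe the pipeline in detail, but the selection step might look roughly like the minimal Python sketch below. The class list is abridged and every function name is hypothetical: the idea is simply that the separated audio is gated so only the user-selected classes reach the output.

```python
import numpy as np

# Abridged, illustrative subset of the 20 sound classes described in the article.
SOUND_CLASSES = {"siren", "infant_cry", "speech", "vacuum_cleaner", "bird_chirp"}

def select_targets(requested):
    """Keep only the requested labels that match a known sound class."""
    targets = {label for label in requested if label in SOUND_CLASSES}
    if not targets:
        raise ValueError("no recognized sound classes requested")
    return targets

def mix_selected(separated_tracks, targets):
    """Sum only the separated tracks whose class label the user selected.

    separated_tracks: dict mapping class label -> mono audio as a NumPy array.
    """
    kept = [audio for label, audio in separated_tracks.items() if label in targets]
    if not kept:
        return np.zeros_like(next(iter(separated_tracks.values())))
    return np.sum(kept, axis=0)

# Example: the user asks for sirens and bird chirps only.
tracks = {label: np.random.randn(16000) for label in SOUND_CLASSES}  # stand-in audio
output = mix_selected(tracks, select_targets({"siren", "bird_chirp"}))
```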


The researchers said the system could isolate target sounds such as sirens, bird chirps, and alarms while removing all other background noise in settings including workplaces, streets, and parks. In tests, 22 participants rated the system's audio output for the targeted sound higher, on average, than the original recording.

ChatGPT AI Application for the Headphones

The published study, titled "Semantic Hearing: Programming Acoustic Scenes with Binaural Hearables," also applied the ChatGPT API to the semantic hearing system, examining whether it could be used "to convert natural language," such as a user saying they want to hear ambulance sounds, "into known sound class inputs for the system," for example by mapping ambulance sounds to the "siren" class.
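The paper's exact prompt is not quoted in the article; the following is a minimal sketch, assuming the OpenAI Python SDK, of how a free-form request might be mapped to one of the system's known class labels. The class list is abridged and the prompt wording is illustrative, not taken from the study.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Abridged, illustrative subset of the system's sound classes.
CLASSES = ["siren", "infant_cry", "speech", "vacuum_cleaner", "bird_chirp", "toilet_flush"]

def map_request_to_class(user_request: str) -> str:
    """Ask the model to translate a natural-language request into one known class label."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "Map the user's request to exactly one of these sound classes: "
                    + ", ".join(CLASSES)
                    + ". Reply with the class name only."
                ),
            },
            {"role": "user", "content": user_request},
        ],
    )
    return response.choices[0].message.content.strip()

# Example: "I want to hear the ambulance" would ideally come back as "siren".
print(map_request_to_class("I want to hear the ambulance"))
```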

The results were mixed, however. In some cases ChatGPT was confused, as when a loud toilet was mapped to the toilet-flush class, while noises outside the supported types were sometimes translated to a similar detectable class, such as wind and fountain sounds being labeled as ocean.

The Challenge in Sound-Selecting Headphones

Senior author Shyam Gollakota, a professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington, said that real-time intelligence is needed to distinguish, say, birds from other sounds in an environment, something current noise-canceling headphones cannot do. The difficulty is that the sounds a wearer hears through the headphones must stay in sync with what the wearer sees.

The professor added that timing also matters for matching what users see: if someone speaks to a wearer, there cannot be a two-second delay, which means the neural algorithms must process sounds in less than a tenth of a second.
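As a rough illustration of that budget (not the paper's implementation), a frame-based loop could time each processing step against the sub-100-millisecond limit; the frame size and function names here are purely illustrative.

```python
import time
import numpy as np

BUDGET_MS = 100      # "less than a tenth of a second", per the article
FRAME_SAMPLES = 256  # illustrative frame size for a hearable-style stream

def process_frame(frame):
    # Placeholder for the neural target-sound extraction step.
    return frame

def run(frames):
    for frame in frames:
        start = time.perf_counter()
        out = process_frame(frame)
        elapsed_ms = (time.perf_counter() - start) * 1000
        if elapsed_ms > BUDGET_MS:
            print(f"frame overran the {BUDGET_MS} ms budget: {elapsed_ms:.1f} ms")
        yield out

# Example: stream ten random frames through the loop.
for _ in run(np.random.randn(10, FRAME_SAMPLES)):
    pass
```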

Due to this time constraint, Interesting Engineering reports, the semantic hearing system must process sounds on-device, such as on a linked smartphone. Furthermore, for listeners to keep perceiving their surroundings naturally, the system has to preserve these delays and other spatial cues, since sounds coming from different directions reach a person's two ears at slightly different times.
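To give a sense of the scale of those cues (this is background arithmetic, not part of the system described), the interaural time difference for a source off to one side can be approximated with Woodworth's formula and amounts to only a fraction of a millisecond, which is the kind of delay the two earpieces must keep intact.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature
HEAD_RADIUS = 0.0875    # m, a commonly used average head radius

def interaural_time_difference(azimuth_deg: float) -> float:
    """Approximate ITD in seconds via Woodworth's formula: (a / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source 45 degrees off-center reaches the nearer ear roughly 0.4 ms earlier.
print(f"{interaural_time_difference(45) * 1000:.2f} ms")
```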

According to the university's news release, the team presented its findings on November 1 and, looking forward, plans to release a commercial version of the technology.
