Brain2Music: AI Model Can Reconstruct Music From Human Brain Waves

This AI method can reconstruct music from human brain activity.

In an exciting leap at the intersection of neuroscience and artificial intelligence (AI), researchers at Google and Osaka University reported achieving something extraordinary: the ability to translate human brain activity into music.

Does this mean that we could be composing our thoughts directly into a song in the future?


AI Model Brain2Music: From Thoughts to Music

Dubbed "Brain2Music," Science X Network reported that this cutting-edge AI model has the power to convert thoughts and brainwaves to reproduce music.

To accomplish this feat, the researchers played music samples spanning 10 genres, including rock, classical, metal, hip-hop, pop, and jazz, for five subjects while monitoring their brain activity with functional MRI (fMRI).

Unlike a standard MRI scan, which captures a static image, fMRI records activity over time by tracking changes in blood oxygenation, a proxy for neural activity that provides crucial insight into how the brain functions.
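In practice, an fMRI recording is a 4D volume: three spatial dimensions plus time. As a minimal sketch, here is how such a scan could be inspected with the nibabel library; the file name is a placeholder, not data from the study.

import nibabel as nib

# Load a 4D NIfTI file (hypothetical path): dimensions are x, y, z, time
img = nib.load("subject1_music_run.nii.gz")
data = img.get_fdata()

# e.g., (64, 64, 30, 600): 600 brain volumes captured over the scan
print(data.shape)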

These fMRI readings were then used to train a deep neural network to identify patterns of brain activity associated with various characteristics of music, such as genre, mood, and instrumentation.
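To illustrate the general idea of decoding music features from brain responses, here is a minimal sketch using ridge regression, a linear decoding method common in fMRI studies. The array shapes and synthetic data are illustrative assumptions, not values from the paper, which used its own model and real recordings.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_clips, n_voxels, embed_dim = 480, 6000, 128

# One row per music clip: fMRI voxel responses and the target
# music-feature embedding (capturing genre, mood, instrumentation)
X = rng.normal(size=(n_clips, n_voxels))
Y = rng.normal(size=(n_clips, embed_dim))

X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, random_state=0
)

# Fit a regularized linear map from brain activity to music features
decoder = Ridge(alpha=100.0)
decoder.fit(X_train, Y_train)

pred = decoder.predict(X_test)
print("predicted embedding shape:", pred.shape)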

Additionally, the researchers integrated the MusicLM model developed by Google into their study. MusicLM generates music based on text descriptions, incorporating factors such as instrumentation, rhythm, and emotions.

Combining the MusicLM model with the fMRI readings, the AI model, named Brain2Music, reconstructed the music the subjects had heard. Instead of using text instructions, the AI leveraged brain activity to provide context for the musical output.
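Conceptually, the pipeline swaps the text prompt for a brain-derived embedding. The sketch below shows that two-stage structure; MusicLM is not publicly available, so generate_from_embedding is a hypothetical stand-in for the conditioned music generator, and decoder is a fitted brain-to-embedding model like the one above.

import numpy as np

def reconstruct(fmri_response, decoder, generate_from_embedding):
    """Map one fMRI response to a music embedding, then generate audio from it."""
    # Stage 1: brain activity -> music-feature embedding
    embedding = decoder.predict(fmri_response.reshape(1, -1))
    # Stage 2: embedding -> audio, replacing MusicLM's usual text conditioning
    return generate_from_embedding(embedding)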

Original Music Stimulus

According to Timo Denk, one of the paper's authors and a researcher at Google, their "evaluation indicates that the reconstructed music semantically resembles the original music stimulus."

The AI-generated music closely resembled the original samples' genres, instrumentation, and mood. The researchers also identified specific brain regions that reflected information originating from text descriptions of music.

The examples the team shared revealed music excerpts, interpreted by Brain2Music from the subjects' brainwaves, that were strikingly similar to the originals. Notably, the model reconstructed segments of Britney Spears' hit song "Oops!... I Did It Again," capturing the instruments and beat with precision, although the lyrics were unintelligible.

The potential applications of Brain2Music are vast and intriguing. As AI technology continues to advance, it could potentially revolutionize music creation, enabling a composer to simply imagine a melody while a device reading activity in the auditory cortex automatically produces the sheet music.

The research team's findings were published on the preprint server arXiv.
