New App Enables Interactive Music Performance Through Voice and Gestures

This app allows users to customize compositions using their voice, facial expressions, or gestures.

A team of researchers has unveiled an app that aims to reshape musical experiences, offering a fresh perspective on musical interaction and giving users unprecedented control over tempo, dynamics, and style.

According to TechXplore, this technology may open up a world of possibilities for musicians and enthusiasts alike by harnessing the power of voice and gestures.


Immersive Music App

Ilya Borovik, a PhD student specializing in computational and data science and engineering, together with a co-author from Germany, has developed an innovative app designed to make music accessible to individuals regardless of their musical background or physical capabilities.

The app, detailed in a chapter of the eBook "Augmenting Human Intellect," introduces a unique approach to tailoring music experiences.

It allows users to customize compositions using their voice, facial expressions, or gestures, enabling adjustments such as altering the tempo or rendering a piece in a soothing lullaby style.

The demo version of the system comprises an AI model trained on an open corpus of renderings for various piano compositions. This model processes notated music and learns how to play it while predicting performance characteristics such as tempo, position, duration, and note loudness.
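
To make that idea concrete, here is a minimal sketch, not the authors' code, of how a model might map notated music to the per-note performance attributes the article lists. The architecture, feature choices, and all names here are illustrative assumptions.

```python
# Illustrative only: a toy stand-in for the kind of model described,
# reading a note sequence and predicting performance attributes.
import torch
import torch.nn as nn

class PerformanceRenderer(nn.Module):
    """Predicts per-note performance attributes from score features."""

    def __init__(self, score_dim: int = 8, hidden: int = 64):
        super().__init__()
        # A bidirectional GRU reads the notes in score order.
        self.encoder = nn.GRU(score_dim, hidden, batch_first=True,
                              bidirectional=True)
        # One regression output per attribute the article mentions:
        # local tempo, position, duration, and note loudness.
        self.head = nn.Linear(2 * hidden, 4)

    def forward(self, score: torch.Tensor) -> torch.Tensor:
        # score: (batch, notes, score_dim) notated features,
        # e.g., pitch, nominal duration, beat position, voice.
        encoded, _ = self.encoder(score)
        return self.head(encoded)  # (batch, notes, 4) performance values

# Example: render 32 notes of a (random) score.
model = PerformanceRenderer()
fake_score = torch.randn(1, 32, 8)
rendering = model(fake_score)
print(rendering.shape)  # torch.Size([1, 32, 4])
```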

The model is incorporated into the app, which gives users direct control over it. Upon launching the app, users grant access to their smartphone's camera and microphone, and playback begins with a rendition chosen at random from the app's database.

To modify the rendition, users start a video or audio recording; through voice commands or facial expressions, they instruct the model to perform the music differently.
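
The sketch below illustrates that flow. The app's code is not public, so every function and value here is a stand-in for the behavior described above, not the actual implementation.

```python
# Hypothetical app flow: launch, grant permissions, play a random
# rendition, then reinterpret it based on a user recording.
import random

RENDITIONS = ["nocturne_take1", "nocturne_take2", "etude_take1"]  # placeholder database

def launch_app(camera_granted: bool, mic_granted: bool) -> str:
    if not (camera_granted and mic_granted):
        raise PermissionError("Camera and microphone access are required.")
    # Playback starts with a rendition chosen at random from the database.
    return random.choice(RENDITIONS)

def modify_rendition(current: str, recording: str) -> str:
    # The recording (voice command or facial expression) is interpreted
    # as an instruction for the model to perform the piece differently.
    return f"{current}+reinterpreted({recording})"

now_playing = launch_app(True, True)
now_playing = modify_rendition(now_playing, "voice: 'play it softly'")
print(now_playing)
```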

The app utilizes performance directions already included in the musical notation to interact with the model. These directions guide the player on how to perform the music, indicating tempo changes, dynamics, and more.

The app translates the user's voice commands into these performance directions, resulting in a unique rendition of the composition.
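
A simple way to picture that translation step is a lookup from spoken words to notation-level adjustments. The vocabulary and direction names below are assumptions for illustration, not the app's actual mapping.

```python
# Hypothetical mapping from transcribed voice commands to
# performance directions that condition the rendering model.
DIRECTION_MAP = {
    "faster": {"tempo_scale": 1.25},
    "slower": {"tempo_scale": 0.8},
    "louder": {"loudness_offset": +0.2},
    "softer": {"loudness_offset": -0.2},
    # A style word can expand into several directions at once,
    # e.g., a lullaby is slow, quiet, and smooth.
    "lullaby": {"tempo_scale": 0.7, "loudness_offset": -0.3,
                "articulation": "legato"},
}

def commands_to_directions(transcript: str) -> dict:
    """Collect performance directions from a transcribed voice command."""
    directions: dict = {}
    for word in transcript.lower().split():
        directions.update(DIRECTION_MAP.get(word.strip(",.!?"), {}))
    return directions

print(commands_to_directions("Play it slower, like a lullaby"))
# {'tempo_scale': 0.7, 'loudness_offset': -0.3, 'articulation': 'legato'}
```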

AI-Powered Demo Version

While the project is still evolving, the research team plans to enhance user-model communication so that users can achieve their desired results more quickly.

The app's interface will undergo improvements, and the composition database will be expanded. In subsequent stages, the researchers aim to incorporate orchestral music into the app's repertoire.

"The demo version of our system comprises an AI model, which has been trained using an open corpus of 1,067 renderings provided for 236 compositions of piano music. The model takes notated music as input and learns how to play it while predicting performance characteristics: local tempo, position, duration, and note loudness," Borovik said in a statement.

"The output is a rendering of the composition. We aimed to provide control over the model to the user, so we incorporated it into the app, which enables interaction between the model and the end user," he added.
