Published on November 16, 2023, 6:56 pm

In January, Google made headlines with its AI-based music creation software that generated tunes from text prompts. Now, Google DeepMind, the company's AI research lab, is taking it a step further by introducing Lyria, a new music generation model developed in collaboration with YouTube. Alongside Lyria, two experimental toolsets have been released: Dream Track and Music AI.

Dream Track enables creators to make music specifically for YouTube Shorts, while the Music AI tools aim to assist the creative process, for example by helping artists build melodies from snippets they hum. Additionally, DeepMind is adapting SynthID, its system for watermarking AI-generated images, to watermark AI-generated music as well.

These releases come at a time when AI in the creative arts continues to spark controversy. It was a significant topic during the recent Screen Actors Guild strike, and in the music industry as well: after the anonymous producer Ghostwriter used AI to imitate the voices of popular artists like Drake and The Weeknd, many wondered whether such AI creations would become more prevalent.

With today's announcement of these new tools, DeepMind and YouTube's main focus appears to be building AI music technology that is credible on both fronts: complementing the work of current creators while still sounding aesthetically pleasing.

DeepMind acknowledges that one of the challenges of generating long sequences of sound is maintaining musical continuity across phrases and verses. As a result, some of the initial applications of Lyria are geared toward shorter pieces.

Dream Track is being tested by a select group of creators, who can build 30-second AI-generated soundtracks in the style of artists such as Alec Benjamin, Charlie Puth, Charli XCX, Demi Lovato, John Legend, Sia, T-Pain, Troye Sivan, and Papoose. Creators choose an artist and provide lyrics or a backing track, and Lyria produces a short piece intended for use with Shorts.

It is worth noting that these artists' involvement in the project extends beyond testing the models; they have provided input and feedback as well. The Music AI tools, developed through the company's Music AI Incubator program, will be released later this year and cover areas such as creating music for specific instruments or ensembles from a hummed tune, turning chords played on a MIDI keyboard into a choir, and generating instrumental tracks for existing vocal lines.

Google and DeepMind are not alone in exploring AI in music. Other companies like Meta and Stability AI have also developed AI music generators, while startups such as Riffusion are raising funds to further their efforts in this field. As the music industry prepares for these advancements, it remains to be seen how AI creation will shape the future of music composition and production.
