Lyria team
Musicians today are drawing inspiration and crafting their sound using a broad ecosystem of tools — from mobile apps to traditional Digital Audio Workstations, specialized plug-ins and hardware. Now, artificial intelligence (AI) is emerging as a powerful new part of this creative toolkit, opening doors to novel workflows and sonic possibilities.
Google has long collaborated with musicians, producers, and artists in the research and development of music AI tools. Ever since launching the Magenta project in 2016, we’ve been exploring how AI can enhance creativity — sparking inspiration, facilitating exploration and enabling new forms of expression, always hand-in-hand with the music community.
Our ongoing collaborations led to the creation of Music AI Sandbox in 2023, which we’ve shared with musicians, producers and songwriters through YouTube’s Music AI Incubator.
Building upon the work we’ve done to date, today we’re introducing new features and improvements to Music AI Sandbox, including Lyria 2, our latest music generation model. We’re giving more musicians, producers and songwriters in the U.S. access to experiment with these tools, and are gathering feedback to inform their development.
We’re excited to see what this growing community creates with Music AI Sandbox and encourage interested musicians, songwriters, and producers to sign up here.
Music AI Sandbox
We created Music AI Sandbox in close collaboration with musicians. Their input guided our development and experiments, resulting in a set of responsibly created tools that are practical, useful and can open doors to new forms of music creation.
The Music AI Sandbox is a set of experimental tools that can spark new creative possibilities and help artists explore unique musical ideas. Artists can generate fresh instrumental ideas, craft vocal arrangements or simply break through a creative block.
With these tools, musicians can discover new sounds, experiment with different genres, expand and enhance their musical libraries, or develop entirely new styles. They can also push further into unexplored territories — from unique soundscapes to their next creative breakthrough.
Create new musical parts
Quickly try out music ideas by describing what kind of sound you want — the Music AI Sandbox understands genres, moods, vocal styles and instruments. The Create tool helps generate many different music samples to spark the imagination or for use in a track. Artists can also place their own lyrics on a timeline and specify musical characteristics, like tempo and key.
24 Comments
mvkel
"Releases" is a strong word; in typical Google fashion, the actual thing that was released was a waitlist form.
htrp
Only available in the US
"Country of residence (this current phase of the experiment is only available to users based in the U.S. for now, but feel free to submit interest and stay tuned for updates)"
modeless
Music models are not interesting to me unless I can use them to edit and remix existing music. Of course none of them let you do that to avoid being sued by the labels.
ipnon
It seems to struggle to create music with a strong identity. It's great if you want a poor imitation of Top 40 hits. But the thing about Top 40 music is that the best of it is already in the Top 40. It remains to be seen whether there is as strong a demand for a music chart filled with slop as there is for a chart filled with pop tunes by celebrities.
I don't think audio files are the right output for deep learning music models. It'd be more useful to pro musicians to describe some parameters for synths, or describe a MIDI bassline, or describe tunings for a plugin, and then have the model generate these, which can then be tweaked similar to how we now code with LLMs. But generating muddy, poorly mixed WAVs with purple prose lyrics is only an interesting deep learning demo at this point, not an advancement in music itself.
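A minimal sketch of what such a tweakable, structured output might look like, assuming a hypothetical format (none of these names correspond to a real model's API): the model emits note events and synth parameters as plain data, which the musician can then edit like code.

```python
# Hypothetical sketch: a model emits an editable "patch" instead of a WAV.
# All class and field names here are illustrative assumptions, not a real API.
from dataclasses import dataclass
from typing import List

@dataclass
class SynthPatch:
    waveform: str       # e.g. "saw", "square"
    cutoff_hz: float    # low-pass filter cutoff
    resonance: float    # 0.0 to 1.0

@dataclass
class NoteEvent:
    pitch: int          # MIDI note number (60 = middle C)
    start_beat: float
    length_beats: float
    velocity: int       # 0 to 127

def transpose(events: List[NoteEvent], semitones: int) -> List[NoteEvent]:
    """Tweakable like code: shift a generated bassline up or down."""
    return [NoteEvent(e.pitch + semitones, e.start_beat, e.length_beats, e.velocity)
            for e in events]

# A "model-generated" one-bar bassline (hand-written example data)
bassline = [NoteEvent(36, 0.0, 0.5, 100), NoteEvent(36, 1.0, 0.5, 100),
            NoteEvent(39, 2.0, 0.5, 90),  NoteEvent(41, 3.0, 1.0, 110)]
patch = SynthPatch(waveform="saw", cutoff_hz=800.0, resonance=0.4)

# The musician tweaks the output directly rather than re-prompting:
up_a_tone = transpose(bassline, 2)   # pitches become 38, 38, 41, 43
patch.cutoff_hz = 1200.0             # open the filter a bit
```

The point of the data-first shape is that every decision the model made stays inspectable and reversible, unlike a rendered audio file.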
eucryphia
You know what the biggest problem with pushing all-things-AI is? Wrong direction.
I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.
– Joanna Maciejewska
You could add music
moralestapia
>Waitlist
They still haven't learned, wow.
Someone in there really wants to drive Google to the ground.
fallinditch
I am interested in using AI-driven music composition tools in new ways, and Lyria 2 sounds impressive, but
a) so far, using these tools leaves me feeling a little meh, and
b) we are in between the before and after times right now, witnessing the transition to the world of AI content and we're definitely losing something.
Prompt: Hazy, fractured UK Garage, Bedroom Recording, Distorted and melancholic. Instrumental. A blend of fractured drum patterns, vocal samples that have been manipulated and haunting ambient textures, featuring heavy sub-bass, distorted synths, sparse melodic fragments.
https://www.youtube.com/watch?v=cNog4qB-mHQ&t=5s&pp=2AEFkAIB
ipaddr
The interest in AI music generation is lower than I initially thought. I jumped in but felt the exercise lacked the joy of making music physically or with software like Pro Tools. With Pro Tools you control thousands of knobs, which gives you far more control. These AI models take away that connection. You can play around with different words to get different results, but it's like painting with a shotgun.
No one wants to hear other people's ai songs because they lack meaning and novelty.
AI image and short video generation can create novelty and interest. But when the medium requires more from the person, like reading a book or watching a movie, the level of AI acceptance goes down. We'll accept an AI-generated email or ad copy, but not an AI-generated playlist, and certainly not a deepfake of someone from reality. That's what people want from AI — a blending of real life into a fantasy generator — but no one is offering that yet.
TheAceOfHearts
I've made a few tracks using Suno to scratch my own itch / desire for music that covers certain themes.
The best use of Suno for me has been the ease with which you can generate diss tracks: I ask Gemini to write diss-track lyrics about specific topics, and then I have Suno generate the actual track. It's very cathartic when you're sitting at home in the dark because the power company continues to fail.
Anyway, I hope I can get access, I think it would be fun to vibe some new music. Although this UI looks severely limited in what capabilities it provides. Why aren't the people who build these tools innovating more? It would be cool if you could generate a song and then have it split into multiple tracks that you can remix and tweak independently. Maybe a section of track is pretty good but you want to switch out a specific instrument. Maybe describe what kind of beats you want to the tool and have it generate multiple potential interpretations, which you can start to combine and build up into a proper track. I think ideally I'd be able to describe what kind of mood or vibe I'm going for, without having to worry about any of the musical theory behind it, and the tool should generate what I want.
adefa
If you missed it, check out MusicFX DJ: https://labs.google/fx/tools/music-fx-dj
It's pretty fun :)
https://imgur.com/a/ohTZXZ0
achow
https://deepmind.google/technologies/lyria/
Lyria 2 is currently available to a limited number of trusted testers
broof
AI music has been awesome for me, not because the music is that good, but because it lets me do something I couldn't have done myself. I use it all the time for my DnD group: songs about characters, funny moments, backstories. It's a great tool that our players have found increases engagement with the game.
rkagerer
Are there any particularly good samples anyone can point out?
The 2-3 clips I listened to in the article sounded awful (my own subjective opinion).
justlikereddit
Classic google approach to AI.
"We made something really fancy"
"Oh you wanted to try it out for yourself instead of just reading our self-congratulatory tech demos article? How about fuck you!"
Yeah fuck you too Google, this is why your AI competitors are eating you alive, and good riddance
ein0p
It's kinda like Suno, except Suno sounds pretty good sometimes. Even so, I played with Suno for a few days and lost interest. There are some amazing examples on Suno, though: https://suno.com/song/9a7fd58e-132c-4ac5-9a25-f40d7f6f8c9f. This is one of the early tunes, it probably can do better now.
n_kr
Is there a model which can generate vocals for an existing song given lyrics and some direction? I can't sing my way out of a paper bag, but I can make everything else for a song, so it would be a good way to try a bunch of ideas and then involve an actual singer for any promising ideas.
xyproto
US only.
collias
I find this to be profoundly depressing.
I've just recently re-discovered the joy of writing my own songs, and playing them with (actual) instruments. It's something I get immense pleasure from, and for once, I'm actually getting some earned traction. In another life, I may have been a musician, and it's something I fantasize about regularly.
With all these AI-generated music tools, the world is about to be flooded with a ton of low-effort, low-quality music. It's going to absolutely drown out anyone trying to make music honestly, and kill budding musicians in their crib.
I suppose this is the same existential crisis that other professions/skills are also going through now. The feeling of a loss of purpose, or a loss of a fantasy in learning a new skill and switching careers, is pretty devastating.
antononcube
The creation of music by AI brings to mind a quote from David Bowie:
“Music itself is going to become like running water or electricity. So take advantage of these last few years, because this will never happen again. Get ready for a lot of touring, because that's the only unique experience left.”
While Bowie had different reasoning for making that statement, it's interesting to think that with AI-generated music, his idea of "music like water or electricity" might finally come true.
mirkodrummer
Strong opinion ahead: the very moment people finally realize that music is in the microscopic nuances of human touch, breath and taste (literally, for every instrument), hopefully we will get disinvested in this useless technology. Yes, I am aware of software like Pro Tools, but that can be used well to touch up all those nuances.
DuckOnFire
AI can make music but not fold my socks?
chaosprint
It seems inevitable now. I used to think AI music would always sound compromised in terms of audio quality, but the tech seems to have crossed a threshold, kind of like Retina displays did for screens.
Soon, hiring people for commercial background music might be rare. Think AI for jingles, voiceovers, maybe even the models and visuals. Cafes can use AI-generated music too – in a way, the owner curates or "creates" it based on their taste.
But there are still interesting parts to human music making: the unpredictability and social side of live shows, for example. Maybe future music releases could even be interactive, letting listeners easily tweak tracks? Like this demo: https://glicol.org/demo#ontherun