
WATCH: Adobe’s project allows anyone to create and tweak music using AI

"It’s a kind of pixel-level control for music."

Adobe has announced the development of an AI-powered music production tool that can create tracks based on text prompts, then allow the user to make more precise adjustments. The aptly named Project Music GenAI Control borrows from Adobe’s Firefly AI model, letting users describe a genre and mood in text, such as “powerful rock,” “happy dance,” or “sad jazz.” Once the base track is generated, the ability to quickly edit the audio is baked into the program’s workflow.


“One of the exciting things about these new tools is that they aren’t just about generating audio—they’re taking it to the level of Photoshop by giving creatives the same kind of deep control to shape, tweak, and edit their audio,” says Nicholas Bryan, Senior Research Scientist at Adobe Research. “It’s a kind of pixel-level control for music.”

While still under development, Adobe is framing Project Music GenAI Control as a user-friendly music production method for those lacking a traditional audio production foundation. By leveraging the strengths of AI, the program allows anyone to develop a soundtrack for their personal or professional projects.

“With Project Music GenAI Control, generative AI becomes your co-creator,” Bryan adds. “It helps people craft music for their projects, whether they’re broadcasters, or podcasters, or anyone else who needs audio that’s just the right mood, tone, and length.”
