Adobe previewed its upcoming generative text-to-video AI model, showcasing images and videos made with the new tool. The company is planning to release the tool in a beta phase towards the end of 2024.
An announcement video features footage made with Firefly’s text-to-video tool, which generates whole video clips from text prompts.
As explained in the video, creators can then refine the resulting clip with a range of “camera controls.” These simulate things like camera motion, shooting distance, and angle. Adobe also demonstrated an image-to-video feature.
According to The Verge, the Firefly footage Adobe shows in this YouTube video is “on par” with footage made by Sora, OpenAI’s video model. Sora, however, raised concerns among many creators, who worried it could replace the labor of stock footage videographers, video editors, and motion artists.
How was Adobe’s Firefly AI tool trained?
Firefly, however, appears to differ from Sora in one major way. OpenAI faced backlash over how it trained Sora; Adobe, by contrast, has assured creators that Firefly is “commercially safe” and was trained only on licensed content.
“What differentiates the Adobe Firefly Video Model that was previewed today from other generative AI video offerings is that it is designed to be commercially safe and only trained on content we have permission to use — never on Adobe customer content,” Ely Greenfield, chief technology officer at Adobe, told Axios. “It will be integrated directly into Adobe workflows to help video editors reach new levels of creative control and efficiency.”
This comes at a fraught time in the creator economy, with numerous strikes taking place as creators fear the implications of AI for the future of copyright and job security.