Adobe’s AI video model is here, and it’s already inside Premiere Pro

New beta tools allow users to generate videos from images and prompts and extend existing clips in Premiere Pro.

Adobe’s Firefly Video Model can generate a range of styles, including ‘realism’ (as pictured).
Image: Adobe

Adobe is making the jump into generative AI video. The company’s Firefly Video Model, which has been teased since earlier this year, is launching today across a handful of new tools, including some right inside Premiere Pro that will allow creatives to extend footage and generate video from still images and text prompts.

The first tool — Generative Extend — is launching in beta for Premiere Pro. It can be used to extend the end or beginning of footage that’s slightly too short, or make adjustments mid-shot, such as to correct shifting eye-lines or unexpected movement.

Clips can only be extended by two seconds, so Generative Extend is only really suitable for small tweaks, but that could replace the need to retake footage to correct tiny issues. Extended clips can be generated at either 720p or 1080p at 24 FPS. It can also be used on audio to help smooth out edits, albeit with limitations. It’ll extend sound effects and ambient “room tone” by up to ten seconds, for example, but not spoken dialog or music.

The new Generative Extend tool in Premiere Pro can fill gaps in footage that would ordinarily require a full reshoot, such as adding a few extra steps to this person walking next to a car.
Image: Adobe

Two other video generation tools are launching on the web. Adobe’s Text-to-Video and Image-to-Video tools, first announced in September, are now rolling out as a limited public beta in the Firefly web app.

Text-to-Video functions similarly to other video generators like Runway and OpenAI’s Sora — users just need to enter a text description of what they want to generate. It can emulate a variety of styles like regular “real” film, 3D animation, and stop motion, and the generated clips can be further refined using a selection of “camera controls” that simulate things like camera angles, motion, and shooting distance.

A screenshot showing the camera control options for Adobe’s text-to-video Firefly AI model.
This is what some of the camera control options look like to adjust the generated output.
Image: Adobe

Image-to-Video goes a step further by letting users add a reference image alongside a text prompt to provide more control over the results. Adobe suggests this could be used to make b-roll from images and photographs, or to help visualize reshoots by uploading a still from an existing video. The before-and-after example below shows it isn’t really capable of replacing reshoots directly, however, as several errors, like wobbling cables and shifting backgrounds, are visible in the results.

Here’s the original clip...
Video: Adobe
...and this is what it looks like when Image-to-Video “remakes” the footage. Notice how the yellow cable wobbles for no reason?
Video: Adobe

You won’t be making entire movies with this tech any time soon, either. The maximum length of Text-to-Video and Image-to-Video clips is currently five seconds, and the quality tops out at 720p and 24 frames per second. By comparison, OpenAI says that Sora can generate videos up to a minute long “while maintaining visual quality and adherence to the user’s prompt” — but that’s not available to the public yet despite being announced months before Adobe’s tools. 

The model is restricted to producing clips that are around four seconds long, like this example of an AI-generated baby dragon scrambling around in magma.
Video: Adobe

Text-to-Video, Image-to-Video, and Generative Extend all take about 90 seconds to generate, but Adobe says it’s working on a “turbo mode” to cut that down. And restricted as it may be, Adobe says the tools powered by its AI video model are “commercially safe” because they’re trained on content that the creative software giant was permitted to use. Given that models from other providers like Runway are under scrutiny for allegedly being trained on thousands of scraped YouTube videos — or, in Meta’s case, maybe even your personal videos — commercial viability could be a deal clincher for some users.

One other benefit is that videos created or edited using Adobe’s Firefly video model can be embedded with Content Credentials to help disclose AI usage and ownership rights when published online. It’s not clear when these tools will be out of beta, but at least they’re publicly available — which is more than we can say for OpenAI’s Sora, Meta’s Movie Gen, and Google’s Veo generators.

The AI video launches were announced today at Adobe’s MAX conference, where the company is also introducing a number of other AI-powered features across its creative apps.