
Burning Questions about Generative Extend in Premiere Pro (beta)


Generative Extend is finally in beta for Adobe Premiere Pro. Unlike Will Smith eating spaghetti or disturbing toy company ads or astronauts wearing knit caps, Generative Extend is AI-generated video that will actually be useful. We are far enough into a world of AI-generated video that we only seem to be impressed when the result is exceptionally realistic or just plain weird. While it’s amazing to be able to produce a full-resolution, full-frame video just by typing in some text prompts, generative video is still of limited usefulness in professional video editing and post-production. 

Adobe’s Generative Extend in Adobe Premiere Pro feels to me like one of the first things that is going to be truly useful as far as AI-generated video goes for professional post-production. It feels revolutionary enough that I think it might take some understanding of how it works to appreciate how it can be used. 

What better way to understand Generative Extend in Adobe Premiere Pro than to answer some burning questions (in the burning questions tradition) that I'm sure every editor has. While these are my thoughts on Generative Extend, Adobe has their own FAQ as well.

How do I get Generative Extend?

It's currently only available in the Adobe Premiere Pro Beta. Update from your Creative Cloud app, but be warned that this is a full version update of Premiere, so it won't be backwards compatible with older beta projects.

How do you use Generative Extend?

There's a brand new tool at the bottom of the PPro toolbar. Select that tool, grab the head or the tail of a clip, and drag the start or the end out for up to two seconds of new, AI-generated video. The Gen Extend tool works like the Rate Stretch Tool in that you're not trimming between two clips: the head or tail has to be free to trim, with no clip directly before or after it.

How does this whole Generative Extend thing work in the application?

The first thing to know is there is a new Scratch disk option under the Project Settings for Generative Assets. IMHO the scratch disks didn’t need any more additions or clutter, but generative video that is being … generated … has to be placed somewhere, so this is it.

There’s also a new preference for AI Models. This will be an important preference as new AI models are introduced.

How many seconds of video will Generative Extend actually create?

You can get 2 seconds of newly generated video either at the head of the clip or at the tail. I have to be honest: if you really need 2 full seconds of non-existent video more than once or twice in your video, then you or the person you are editing for has probably done something wrong. A few frames here and there? Maybe it will be extremely useful for that.

Does Generative Extend work with audio?

It does, and you can extend audio by up to 10 seconds. You can use the new Generative Extend tool to extend audio and video at once on linked media, or extend either one at a time.
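To put those caps in concrete terms, here's a trivial bit of arithmetic (nothing Premiere-specific, just frame and sample counting at common rates) showing what 2 seconds of video and 10 seconds of audio actually buy you:

```python
# Quick arithmetic on Generative Extend's stated limits:
# up to 2 seconds of video and up to 10 seconds of audio.

VIDEO_LIMIT_S = 2.0
AUDIO_LIMIT_S = 10.0

def frames_for(seconds: float, fps: float) -> int:
    """Whole frames that fit in a duration at a given frame rate."""
    return int(seconds * fps)

for fps in (23.976, 24, 25, 29.97, 59.94):
    print(f"{fps:>6} fps -> up to {frames_for(VIDEO_LIMIT_S, fps)} new video frames")

# Audio at the typical 48 kHz production sample rate
print(f"48 kHz -> up to {int(AUDIO_LIMIT_S * 48_000):,} new audio samples")
```

So a "full" video extension is only somewhere between about 48 frames at 24p and about 120 at 60p, which is why the few-frames-here-and-there case is the realistic one.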

How does Gen Extend work in the timeline?

When you generate a new piece of video in the timeline, it is marked with a black AI-generated marker so you can easily see what has been AI generated. The original video clip and the new generated video are then put into a nest so they appear as one clip in the timeline.

A new sequence will appear in your project, since that’s what happens whenever you nest any clips. That new sequence will be named with the same name as the original clip you are generating the new media for, plus a timecode suffix to mark where the generative clip begins in that nested timeline.

Above are the new generative extend sequence nests that are created when you use Gen Extend.
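Adobe hasn't documented the exact suffix format, so take this as illustration only, but the timecode math behind that kind of suffix is simple. A minimal sketch (my own hypothetical helper, not anything from Premiere) converting a frame count into the familiar HH:MM:SS:FF string:

```python
def frames_to_timecode(total_frames: int, fps: int = 25) -> str:
    """Convert a frame count to a non-drop HH:MM:SS:FF timecode string.

    Hypothetical helper for illustration; assumes an integer frame rate
    and non-drop-frame counting.
    """
    frames = total_frames % fps
    total_seconds = total_frames // fps
    seconds = total_seconds % 60
    minutes = (total_seconds // 60) % 60
    hours = total_seconds // 3600
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

# e.g. a Gen Extend point 1,503 frames into a 25 fps nested timeline:
print(frames_to_timecode(1503))  # 00:01:00:03
```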

A new folder called 'Generated Media' and a subfolder called 'Generated Extend' are also created in the project; this is where the master clips that were AI generated live. Since these are new master clips, they could be edited and used just like any other master clip. However, you probably want to keep them associated with the original clip that you wanted to extend.

Since we get this ‘Generated Extend’ sub-folder, is that a clue that we may get more options for generating AI media right from within Premiere? That’s probably a pretty good guess.

You do get some feedback in the timeline while Generative Extend is working.

I’ll go on record and say I’m not a fan of the mechanics of how Generative Extend is adapted for your timeline. I understand that Adobe had some choices to make here, and what they’ve chosen to do is probably the easiest and most user-friendly option for many workflows.

Is some of my EXISTING video uploaded to the Adobe Cloud to create new video with Generative Extend?

Yes, a couple of seconds of existing video does get uploaded to the Adobe cloud, since it has to use something to generate that new video. The generative model has to live in the cloud, as it's too big and too complex to run locally within Premiere. Will that change in the future? Maybe. Video requires 2 seconds of uploaded footage and audio requires 3 seconds. I was actually quite shocked when I heard that it only needs 2 seconds of video to produce some pretty high-quality results. I know you may only be producing a couple of frames, but it's doing a lot of work with those two seconds.

The AI-based Essential Sound Enhance feature, while first being cloud-only, is now local in Premiere. However, the model isn’t as big as what lives in the cloud, so you often can get a better result by uploading to the cloud vs. using Enhance locally. All that to say, I guess these AI models can be pretty big, and you just can’t get as good of results locally as you can by doing the processing in the cloud.

Is my original video that had to be uploaded to the cloud for Gen Extend safe in the cloud?

The Progress Dashboard will track the progress of Gen Extend.

It is my understanding that the video that has to be uploaded to the cloud for Generative Extend to work is only there for a short period of time and will be overwritten on Adobe’s servers as new source video gets uploaded for Gen Extend.

Where does the newly generated AI media live?

Remember that new Scratch Disk setting above? That is where the newly generated media will live. So you have a choice of where to place it as the new media is downloaded from Adobe’s cloud.  

These aren’t video preview render files that you might delete, can easily regenerate or might not need in the future. Placing this new Gen Extend storage preference in the scratch disks seems a bit odd to me. It feels more important than that, but it’s probably just as important as your Motion Graphics template media, so that’s why it goes there. Perhaps we’re getting close to a Scratch Disk redesign. We certainly could use it.
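Because these files matter for archiving, it can be worth auditing what's actually sitting in that scratch location. A small sketch, assuming a made-up scratch path (substitute whatever you set under Project Settings), that tallies the generated .mp4s:

```python
from pathlib import Path

# Hypothetical scratch location -- use whatever path you set under
# Project Settings > Scratch Disks for Generative Assets.
SCRATCH = Path("/Volumes/Media/PremiereScratch/Generative Assets")

total_bytes = 0
for clip in sorted(SCRATCH.rglob("*.mp4")):
    size = clip.stat().st_size
    total_bytes += size
    print(f"{clip.name:50s} {size / 1_048_576:8.1f} MB")

print(f"\n{total_bytes / 1_048_576:.1f} MB of generated media to archive")
```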

And if the generated clips go offline, you will be asked to relink.

When this feature comes out of beta, you'll most likely want these generated clips to live with your source clips, so they're always there and easy to archive. Which brings me to…

Can I use Generative Extend in an offline/online workflow?

You may not want to use it now since it’s limited in its resolution and bit depth.

Is there a keyboard shortcut for Generative Extend?

There isn't a default keyboard shortcut for the new Gen Extend tool, but you can map one.

What format is the newly generated media?

Currently, in the beta, any newly generated video will be limited to an H.264 .mp4 that matches the frame rate of your sequence. These are the exact specs of what it will generate in the current beta:

That will change once Gen Extend comes out of beta. It’s understandable that Adobe doesn’t want people throwing giant ProRes files all over the internet during the beta phase as they perfect the models and the workflow.
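If you want to verify what the beta actually handed back, you can inspect a generated clip yourself with ffprobe (from the FFmpeg suite). A quick sketch; the file path is hypothetical:

```python
import json
import subprocess

# Point this at one of the generated .mp4s in your Generative Assets
# scratch folder (hypothetical path shown).
GENERATED_CLIP = "/Volumes/Media/PremiereScratch/Generative Assets/extend_001.mp4"

result = subprocess.run(
    [
        "ffprobe", "-v", "error",
        "-select_streams", "v:0",
        "-show_entries", "stream=codec_name,width,height,avg_frame_rate",
        "-of", "json",
        GENERATED_CLIP,
    ],
    capture_output=True, text=True, check=True,
)

stream = json.loads(result.stdout)["streams"][0]
print(f"codec:      {stream['codec_name']}")       # expect h264 in the beta
print(f"size:       {stream['width']}x{stream['height']}")
print(f"frame rate: {stream['avg_frame_rate']}")   # should match your sequence
```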

I can tell you that Adobe knows editors want these newly generated clips to match the video preview settings of your sequence, and that makes the most sense IMHO. If you're rendering video effects and Render and Replace files to ProRes, you'll want your Gen Extend files to be ProRes as well. That puts them in a good editing format and possibly in your final output for faster exporting. If my whole edit is made up of nice, buttery-smooth ProRes files, I certainly want any generated media to be the same. That said, it takes much more bandwidth to upload and download ProRes files than an H.264. Adobe is attempting quite a daunting task here.

Okay, but again, how will I use Generative Extend in an offline/online workflow?

Well, currently, you would have to extract the .mp4 files from the designated scratch disk folder and then over-cut them in the timeline with the original clip you're extending. That would mean manually removing the Generative Extend nest that is created. I can't find any way to un-nest it, and the Enable > Flatten multicam trick doesn't work on the Gen Extend nests.
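If you did need to pull those files today, the extraction step is plain file copying. A rough sketch with purely hypothetical paths, collecting the generated .mp4s into a folder alongside your source media for the conform:

```python
import shutil
from pathlib import Path

# Both paths are hypothetical -- substitute your own scratch disk
# location and wherever your source media lives for the conform.
SCRATCH = Path("/Volumes/Media/PremiereScratch/Generative Assets")
CONFORM_DIR = Path("/Volumes/Media/Project/SourceMedia/GenExtend")

CONFORM_DIR.mkdir(parents=True, exist_ok=True)

for clip in SCRATCH.rglob("*.mp4"):
    dest = CONFORM_DIR / clip.name
    if not dest.exists():  # don't clobber anything already conformed
        shutil.copy2(clip, dest)
        print(f"copied {clip.name}")
```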

⬆️ Black-and-white footage? You get black-and-white generative video, of course.

But again, this is all still in beta, so you're not going to be using the results in high-end offline/online workflows. My guess is the mechanics of this might get updated before it ships, because if you are going to use this in any kind of workflow where you have to conform and edit in a color grading app, it will be challenging as it stands.

Can I continue working while Generative Extend is doing its thing?

You can, as Premiere will do its uploading, generating, and downloading all in the background. It's like Warp Stabilizer, in that you get a little warning in the Program monitor if you happen upon Gen Extend while it's still working.

Okay, how good is Generative Extend?

It’s pretty good. I commented in an NAB focus group that if Generative Extend actually works, then I would eat my shoe. I think I probably need to get used to the taste of leather. It’s certainly not perfect on every clip (nor can it be expected to be) and it’s not going to work in every situation, but from all the testing I’ve done, there will be a lot of times when it will work very well.

How can Generative Extend be used in real-world editing?

While I'm sure everyone will just grab a ton of clips and generate the full 2 seconds while testing in beta, I think the thing to think about is how you would use this in the real world. You're most likely not going to want to generate a full 2 seconds – in fact, you may only need a partial second or a few frames. Then think about that in the context of the edit, as most viewers will see the bit of generated media play out across a full story with clips on either side. This context means the results will be quite usable and very often good enough.

⬆️ Watch the smoke closely in this clip above. Can you tell what was AI Gen Extend and what wasn’t?

What are the current limitations that you see, smart guy, since you've been testing Generative Extend?

Since your new clip is literally cutting from your old media to your newly generated extended clip, sometimes I've seen a bump in the brightness of the new clip, and it's quite noticeable. That would make it unusable.

⬆️ Keep an eye on the skin. (Thanks to EditStock for the footage).

⬆️ There’s flicker in the generated video at the end but there was a subtle flicker in the original footage.

If you have text or a logo that is obscured before Gen Extend goes to work, it won’t be able to generate that text or logo because it can’t see what that text or logo might be. And that’s obvious. But it is good at being able to replicate text and a logo if it has a clear view of it in the clip you are extending.

Can I generate 2 seconds at the head and 2 seconds at the tail of an existing clip, resulting in 4 total new seconds?

You can, but you'd have to go into the nest and recover the original source clip. As soon as you let go after dragging out the head or tail with the Generative Extend tool, the clip is placed into a nest as described above. I've found that I like to duplicate the clip I want to extend onto the track above. I use a very similar workflow with Render and Replace or a dynamic link to After Effects. That way, I have quick and easy access to the original source clip.

Can I extend from the middle of a clip?

You can. The Generative Extend tool doesn’t care how much media you may or may not have left in a clip. If you don’t like where a clip goes, just place an edit where you want Gen Extend to begin and then use the Generative Extend tool to see what the AI will give you for the next couple of seconds.

⬆️ Above are four different generations of the tail of the same shot. It's interesting to watch the differences.

Any final thoughts on this new Generative Extend in Premiere Pro?

I'm really impressed with what I've seen playing with this new technology. Many of the clips above show things going wrong, but I don't think that is the norm. My original thought was that it would be difficult to match things like camera movement, less-than-stellar lighting, and just some of the general things that happen when you're shooting B-roll. But the AI is doing a really good job. Where I see it falling down is in some of the sharper detail, as in the video examples above. You can see skin get a little too smooth, or you can see the smoke lose its crispness once the AI model has generated new video. Other than that, a lot of it is really, really good and very usable in the context of an edit.

⬆️ Not everything is always perfect, but there's a Generate Again submenu where you can have it go again.

I'm happy to finally see a useful generative video tool for many of us editing in the trenches of professional post-production. We don't need to generate brand-new clips from scratch; we just need help fixing some of what has already been shot. Isn't that part of the mantra of media creation these days? And I do fear this might become more the norm:

Fix it in post.
