
For me, Artificial Intelligence in Post has mostly been a bust…until now.

I’ll be honest. For me, much like the 3D phase in post, Artificial Intelligence has largely been a bust. I can’t think of any AI tools that have genuinely enhanced my workflows or made them better. I’ve seen the videos from Adobe showing what’s coming to Premiere but, to be honest, Adobe has let me down before with enhancements that were supposed to “revolutionize” my workflows. Now, don’t get me wrong, I’m sure there are bits and pieces of AI that help people out, but there really isn’t anything I can say I use on a regular basis. Until now. A couple of months ago, I had a “THAT’S IT” moment, where I could easily see myself not only using an AI tool in a whole bunch of different projects, but also where I could see this “machine learning” workflow going in the future. Good job, Boris FX, good job!

INTELLIGENT… IT CERTAINLY IS

Now, I’ll start this article out by saying that I’m not getting a dime from Boris FX for it. I’ve been a Continuum user since before it was offered free to all Media Composer editors who upgraded to Symphony (I know Media Composer editors all remember that), and I’ve used it in After Effects for as long as I can remember. Most of my “WOW” benchmarks for advancing my workflows haven’t come from the NLE or compositing applications themselves. They’ve come from Continuum.

I mean, let’s be honest, Boris FX has pulled off some pretty surprising acquisitions over the last few years. GenArts, Imagineer Systems, WonderTouch, SynthEyes, and even the licensing of Primatte technology have made Boris FX the one-stop shop for just about anything an editor or compositor could need. For me, easily the biggest leap forward in the last 15 years has been the integration of Mocha technology, not only across almost all of the effects inside both Continuum and Sapphire, but also its licensing in After Effects, which makes it the standard for tracking in AE today. So, you’re probably thinking, what does this have to do with AI and machine learning? Well, Boris FX just released the 2024.5 update for Continuum, and tucked away in it is a look at the future of the effects package. Believe it or not, it’s the Witness Protection effect that will lead the way to the next generation of effects in Continuum and, in the process, save editors and graphic designers countless hours of tedious work, even compared with the best tools available now.

Now, since everything these days is called “SOMETHING AI”, Boris FX decided to go down a slightly different path by calling theirs ML, or “MACHINE LEARNING”, and you can find the four “ML” effects in Continuum simply by searching for them.

So, looking at the Media Composer version pictured above (the ML effects are available across the other Continuum host applications as well), you’ll notice there are actually four different ML effects: DeNoise, ReTimer, UpRez and Witness Protection which, in Media Composer, is a real-time effect. So, what is the Witness Protection effect exactly? Well, you’ve seen it a million times before. Need to blur out the face of someone walking down the street because you don’t have permission to use their likeness in your production? That’s where you would use an effect like this.

However, it has worked very differently in the past and, to be honest, the effect went from wonky to very cool to awesome. It started out wonky because it used the Continuum tracker to do all the motion tracking. We all know how unreliable point tracking can be, and having Mocha integrated with almost all the effects in Continuum stepped this effect up a notch, as it made the tracking process much easier and much more precise. It was, however, not without its issues. If the talent walked behind a tree, lamppost or other object, more work was required inside of Mocha and, really, any time this type of effect was called for, it came with a bit of a cringe from the editor, as we know how much time this kind of work really took, and it could be painfully slow.

Well, not anymore. How does it work? Drag and drop. Yep. That’s it. Drag the effect on (or apply it, depending on the application you’re using), and that’s it. Talent walks behind something? No problem. The ML (Machine Learning) effect is dropped back on as they come back out from behind it. Does your character walk on or off screen? Again, no problem, as ML will add the effect back on when they reappear. Take a look at what I mean below:

The effect still contains everything you had available before, like the ability to switch to a mosaic pattern instead of a blur, and you can even turn ML off altogether if you want to apply the effect to something different, like a logo on someone’s shirt.

With that said, this is where I really see the potential in this effect. Right now, the ML component is designed to detect faces and, essentially, add an ellipse to them as a mask, which Continuum then uses to either blur or mosaic someone’s face.
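To make the idea concrete, here is a minimal sketch of the general technique the effect automates: detect a face on each frame, fit a slightly enlarged elliptical mask around it, feather the mask edge and blur whatever falls inside it. To be clear, this is not Boris FX’s code and says nothing about how Continuum is actually implemented; it just illustrates the concept, using OpenCV’s bundled Haar cascade as a stand-in for a proper ML face detector.

```python
import cv2
import numpy as np

# OpenCV's bundled frontal-face Haar cascade, standing in for an ML detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def blur_faces(frame, feather=31, scale=1.2):
    """Blur every detected face in a BGR frame behind a feathered ellipse."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        # Fit an ellipse over the detection, slightly enlarged -- the same
        # "make the mask a bit bigger" tweak mentioned below.
        center = (x + w // 2, y + h // 2)
        axes = (int(w * scale / 2), int(h * scale / 2))
        cv2.ellipse(mask, center, axes, 0, 0, 360, 255, -1)
    # Feather the mask edge so the blur falls off softly instead of hard-edged.
    mask = cv2.GaussianBlur(mask, (feather, feather), 0)
    blurred = cv2.GaussianBlur(frame, (51, 51), 0)
    alpha = (mask.astype(np.float32) / 255.0)[..., None]
    return (frame * (1.0 - alpha) + blurred * alpha).astype(np.uint8)
```

In Continuum, the detection, the occlusion handling and the re-acquisition when someone leaves and re-enters the frame are all handled for you; the sketch above is only the per-frame core of the idea.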

I was floored by how quick and accurate it was. The only adjustments I actually had to make were to add a bit of feather to the mask and make it slightly bigger, but otherwise it did all the work for me. It’s the first time I’ve done anything with AI and thought “HOLY ****, I CAN ACTUALLY SEE MYSELF USING THIS ON A REGULAR BASIS”.

Now, let’s think about where this effect could go. What about logos on shirts or on products? What about the ability to blur out nudity? What about the ability to look at a transcript and blur over someone’s mouth when they swear? These are applications that editors, especially ones who work on reality TV, could really use in their day-to-day workflow, and they would save an absolute ton of time in the compositing chair. We can even look across other effects in Continuum to see where this kind of capability could speed up our workflows. Take any lens flare effect, for example. Simply type in what you want your lens flare “attached” to (sun, headlight, flashlight) and you’ve saved yourself the tracking work. Something so simple could save us minutes, even hours, of tracking time.

For me, this one effect has gotten me excited about Artificial Intelligence/Machine Learning in my NLE/compositing application, as it’s something I can easily see a wide range of editors using on all different types of productions. For more information about Continuum 2024.5, you can check it out at borisfx.com.
