It’s been a while since I’ve posted much about the advancements in current AI tools, and although I’ve had some successes in utilizing many of them for projects, we’re still a long way from AI doing ALL the work for producers/content creators. I’ll share some examples from my workflows and take a peek into the near future as these tools develop and become more useful.
Like most of us, I get excited every time I see a “new development” pop up in someone’s feed showcasing a beta version of some AI tool, only to find out that they didn’t just use that one tool to achieve their final results, but rather an alpha version along with a lot of other tools – often accessed as an API inside another tool or custom programming to get the finished look.
As most of my readers already know, I’ve been following this progression of AI tool development for a few years, and I try to implement it wherever possible to produce the results I need for my clients. The above image is from an animation I did (featured later in this article) that utilized both generative AI imaging and animation, plus layers of particle effects in Adobe After Effects. So again, for me, AI is so far just another tool, albeit a powerful and unwieldy one. It’s quite the beast to be tamed!
I’ve lauded the advancements coming from HeyGen and ElevenLabs, and I’ll focus more on them next month. As an example of what I’ve developed to date that I can share, I’ll cover what I created for my recent virtual workshop at the CreativePro Design + AI Summit back in December.
Design + AI Summit Conference Session (Video + AI: Revolutionizing Visual Storytelling)
For my presentation this year, I decided that nobody wanted to watch me yapping on a Zoom cam at my desk for an hour, so I thought that with some planning and prep, I could simulate being on a small stage for a “TED Talk” using some of the very tools I was going to share with the virtual audience.
Although I share a lot more info, tools, and examples throughout my talk, I started with a quick walking intro before appearing “on stage” in front of an AI-generated audience. This short video gives you the breakdown of just how I accomplished it – including the errors generated by the HeyGen walking-avatar software. I used just three tools to complete the process, from content creation to animation, and finally composited everything in Adobe After Effects & Premiere Pro:
I’ll be digging into HeyGen and other tools in more depth in my next articles coming soon.
So, moving on to something a little more current: developing a completely AI-driven content workflow proves to be much more challenging than one might think from all the posts and ads you’ll see on social media.
How many of you have seen amazing stuff like this from these AI tool providers? HeyGen, Runway, KREA, KLING, MiniMax Hailuo, etc…
They’re all advancing at a breakneck pace and competing for coveted space. I try to keep up with what I can, but even the modest subscriptions add up to hundreds of $$/mo, and when you’re not getting paid gigs regularly, it’s hard to spend much time with them and really explore or gain much traction. I only opt for the annual rate on the tools I know I’ll use the most; for testing purposes, I usually subscribe for just a month or two and then stop until I see new updates released later.
Fun templated tools pop up to distract us, of course, like my most recent flirtation with the tool called Pika. I’d tried it before just with T2I (Text to Image), and it seemed to work well enough but didn’t stand out until this latest release. You can use their mobile app or website to generate image-to-video animations with their premade templates, and the results can be surprising.
Just to pass the time between actual project renders, I played with a couple of images I had on my phone: one was my headshot and the other an AI-generated satirical caricature toy. Neither worked perfectly, but they were fun enough to share on social media or with friends anyway.
But in reality, how much do these tools actually help in providing the content you really want?
AI Animation Project Breakdown: What worked? What didn’t?
It’s all subjective – and IT’S NOT CHEAP TO EXPERIMENT!! I used up a full month’s worth of credits across all these apps just to get enough source content for this sub-five-minute animation for a recent project. Most of what I used was a combination of AI source imagery/animation along with basic particle generation in Adobe After Effects.
Here’s our final resulting video:
Here’s my breakdown for the production, starting with the audio recording that I built the video from…
I started with an audio recording that the narrator made on her iPhone and simply sent me as a file. There was a little background noise from a space heater and a little room reverb, so I ran the file through Adobe Podcast Enhance online and got a really clean audio track to work from in Adobe Audition.
I then used the AI transcription built into Microsoft 365 Word and got a clean voice-to-text file to work from when creating the images and animations I needed.
I began by gathering some base images to work from, so I went to Midjourney and literally copied and pasted lines from the transcription just to see what it would give me in return.
For the most part, I was rewarded with some very useful ideas that I could build on, so I’m still a fan of Midjourney when it comes to imagining various scenarios.
So I started with the lead image of Pandora with the box and then tried to get ANY of the AI animators to work with it – just opening the top of the box seems to be an impossible task for most of them. I first tried Runway, to no avail: it completely changed out the originating image, and then on subsequent attempts it gave errors that the results weren’t safe, so I couldn’t view/download them.
I then went to MiniMax Hailuo to try the image-to-video option, and it couldn’t handle this request either. It couldn’t open the box correctly and often placed the black smoke somewhere else in the scene rather than coming out of the box. This is the most frustrating part of these AI tools: you really don’t have that much control without knowing exactly how to prompt the shot – and then you use up all your render credits trying to fine-tune it!
I then tried to see if it could just generate an animation of Pandora opening the box with black smoke escaping, and while it did give me realistic figures, the mechanics of the box/smoke were again totally lost. With each subsequent attempt to correct the prompt with specific keywords, it just got worse and worse.
I also tried Leonardo and Krea using text-to-image and couldn’t get satisfying results.
I then tried a few renders in Kling and finally got something I could use in the final video – however, what you don’t see is the box top opening, since its timing and mechanics were messed up; I was fading up from black at that point anyway, and the rest came out well.
Once I had this first and most complex clip generated, I was able to use several different tools for the subsequent shots to get the effects I was looking for – especially since I didn’t really have any particular “style” in mind, so the results range all over the place between ethereal illustration and realism.
However, I couldn’t really get exactly what I wanted for each shot, so I had to use After Effects to composite several layers over each other with masks and particle effects to produce a lot of the content. I also had to loop several animations for the background textures. At least I could set all my timing from the VO track, which let me build a more seamless animation sequence in the more complex sections.
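If you want to try looping AI-generated clips as background textures yourself, here’s a minimal sketch of the standard Time Remap expression approach in After Effects – a generic example, not necessarily the exact setup I used on every layer in this project:

```
// Generic After Effects looping sketch (assumes a pre-rendered clip layer):
// 1. Enable Time Remapping on the layer (Layer > Time > Enable Time Remapping).
// 2. Drag the layer's out point to cover the full comp duration.
// 3. Apply this expression to the Time Remap property:
loopOut("cycle");       // repeats the clip end-to-end for the length of the layer
// For textures that visibly pop at the cut point, a ping-pong loop can hide the seam:
// loopOut("pingpong");
```

The same expression works on any keyframed property, which is handy when you’re cycling particle or texture layers under composited AI footage.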
But in the end, this animation was still 100% CG, including the AI-generated music (except for the original voice recording).
Looking ahead…
So not only will I be digging into the updates from HeyGen, but I’ll also be exploring some of the music/audio tools that are currently out there, like Suno, which does an amazing job with lyrics and melodies by category.
Stay tuned…