While the hosts of the Alan Smithee podcast already discussed the evolving landscape in media and entertainment as 2024 draws to a close, there’s so much more to say about what happened in 2024 and what 2025 has in store for individual creators and the entire industry. Generative AI for video is everywhere, but how will that pervasiveness impact actual workflows? What were some of the busts in 2024? How did innovations in cameras and lenses make an impact? And what else are we going to see in 2025 beyond the development of AI animation/video tools?
Below is how various PVC writers explored those answers in a conversation that took shape over email. Keep the conversation going in the comments or on LinkedIn.
2024? Of course, the first thing you think about when recapping 2024 (and looking ahead to 2025) is that it was all about artificial intelligence. Everywhere you look in technology, media creation, and post-production, there is some mention of AI. The more I think about it, though, the more I feel like it was just a transitional year for AI. Generative AI products feel like old hat at this point. We’ve had “AI” built into our editing tools for what feels like a couple of years now. While we have seen a lot of useful advancements, I don’t feel like any earth-shattering AI products shipped within any of our editing tools in 2024. Adobe’s Generative Extend in Premiere Pro is probably the most useful and most exciting AI advancement I’ve seen for video editors in a long time. But it’s still in beta, so Generative Extend can’t count until it begins shipping. Jumper, an AI-based video search tool that did ship this year, is a truly useful third-party addition. Adobe also dropped its “visual search” tool into the Premiere beta, so we know what’s coming next year as well, but it’s far from perfect at the time of this writing, and still in beta. I appreciate an AI tool that can help me search through tons of media, but if that tool returns too many results, I’m yet again flooded with too much information.
The other big AI-based video advancement was text-based generative video coming into its own this year. Some products shipped, albeit at quite a high price for anything truly usable. And even more went into preview or beta. 2025 will bring us some big advancements in generative video, and that’s because we’re going to need them. What we saw shipping this year was underwhelming. A few major brands released AI-generated commercial spots or short films, and they were both unimpressive and creepy. I saw a few generative AI short films make the rounds on social media, and all I could do after watching them was yawn. The people who seemed most excited by generative video (and most trumpeting its game-changing status) were a bunch of tech bros and social media hawks who didn’t really have anything to show other than more yelling on social media from their paid and verified accounts or their promoted posts.
Undoubtedly, AI will continue to infiltrate every corner of media creation. And if it can do things like make transcription more accurate or suggest truly usable rough cuts, then I think we can consider it a success. But for every minor workflow improvement that is actually useful in post-production, we’ll see two or three self-proclaimed game-changing technologies that end up just being … meh.
In the meantime, I’ll happily use the very affordable M4 Mac mini in the edit suite or a powerful M4 Mac Studio that speeds up the post-production process overall. We can all be thankful for cloud-based “hard drives,” like LucidLink, that do more for post-production workflow than most AI tools that have been thrown our way. Maybe 2025 will be the year of AI reality over AI hype.
While I’m aware that the issues we’ve been facing on the writing/producing/directing side don’t affect many of the tech teams on the surface, it has been rather earth-shattering on our side of things. Everyone fears losing their jobs to AI, and with good reason. I am in negotiations with a European broadcaster who really likes the rather wacky travel series I’m developing but, like so many broadcasters now, simply doesn’t have enough cash to fully commission it. They flat-out told me that I would write the first episode, and they would simply feed the rest of the episodes into AI so they wouldn’t have to pay me to write more eps. I almost choked when they said that matter-of-factly, and I responded by saying that this show is comedy and AI can’t write comedy! Their response was a simple shrug of the shoulders. It was devastating for me, and given such an obvious lack of integrity on their part, I’m now concerned that they are going ahead with that plan even though we don’t have a deal in place. So, from post’s perspective, that is one more project that isn’t being brought into the suite, because I can’t even get it off the ground to shoot it.
As members of various guilds and associations, we are all learning about the magnificent array of tools we now have at our fingertips for making pitches, sizzle reels, visual references, etc. It really is astonishing what I can now do. I’m learning as much as I can, though I just can’t shake the guilt of knowing that I’ll be putting the great graphic designers I used to hire out of work. If budgets were a little better, I would, of course, hire a graphic designer with those skills, but as things stand today, I can’t afford to do that.
It’s definitely a fascinating and perplexing time!
AI in various forms gets the press, but in most cases it will continue to be more marketing hype than anything else. Useful? Yes. True AI? Often, no. However, tools like Jumper can be quite useful for many editors, although some aspects, like text search, have existed for years in PhraseFind (Avid Media Composer).
There are many legal questions surrounding generative AI content creation. Some of these issues may be resolved in 2025, but my gut feeling is that legal claims will only really start rolling in this coming year. Vocal talent, image manipulation, written scripts, and music creation will all be areas requiring legal clarification.
On the plus side, many developers – especially video and audio plugin developers – are using algorithmic processes (sometimes based on AI) to combine multiple complex functions into simple one-knob style tools. Musik Hack and Sonible are two audio developers leading the way in this area.
One of the less glitzy developments is the future (or not) of post. Many editors in major centers for film and TV production have reported a lack of gigs for months. Odds are this will continue, not reverse, in 2025. The role of (or even need for) the traditional post facility is being challenged. Many editors will need to find ways to reinvent themselves in 2025. As many businesses enforce return-to-office policies, will editors find that remote work becomes less acceptable to directors?
When it comes to NLEs, Adobe Premiere Pro and Avid Media Composer will continue to be the dominant editing tools when collaboration or project compatibility is part of the criteria. Apple Final Cut Pro will remain strong among independent content creators. Blackmagic Design DaVinci Resolve will remain strong in color and finishing/online editorial. It will also be the tool for many in social media as an alternative to Premiere Pro or Final Cut Pro.
The “cloud” will continue to be the big marketing push for many companies. However, for most users and facilities, the internet pipes still make it impractical to work effectively via cloud services with full-resolution media in real time. And, of course, full-resolution media is getting larger, not lighter, which is hardly conducive to cloud workflows.
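To put rough numbers on those pipes, here is a back-of-envelope sketch. The ~700 Mb/s figure is an approximation for UHD ProRes 422 HQ and the 100 Mb/s uplink is a hypothetical connection; both are assumptions for illustration only:

```python
# Back-of-envelope check: can a given internet connection stream
# full-resolution media in real time, and how long does an upload take?

def can_stream_realtime(codec_mbps: float, link_mbps: float) -> bool:
    """Real-time playback needs sustained throughput >= the codec bitrate."""
    return link_mbps >= codec_mbps

def upload_hours(footage_hours: float, codec_mbps: float, link_mbps: float) -> float:
    """Hours needed to push the footage up a link of the given speed."""
    total_megabits = codec_mbps * footage_hours * 3600
    return total_megabits / (link_mbps * 3600)

# Assumed numbers: ~700 Mb/s for UHD ProRes 422 HQ, a 100 Mb/s uplink.
PRORES_HQ_UHD = 700.0
UPLINK = 100.0

print(can_stream_realtime(PRORES_HQ_UHD, UPLINK))  # False
print(upload_hours(1.0, PRORES_HQ_UHD, UPLINK))    # 7.0 (hours per hour of footage)
```

In other words, with those assumed numbers, one hour of camera media takes seven hours to move, which is why proxies and lighter codecs still dominate cloud workflows.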
A big bust in this past year has been the Apple Vision Pro. Like previous attempts at immersive, 3D, and 360-degree technologies, there simply is no large, sustainable, mass market use case for it, outside of gaming or special venues. As others have predicted, Apple will likely re-imagine the product into a cheaper, less capable variant.
Another bust is HDR video. HDR tools exist in many modern cameras and even smartphones. HDR is a deliverable for Netflix originals and even optional for YouTube. Yet the vast majority of content that’s created and consumed continues in good old Rec 709. 2025 isn’t going to change that.
2025 will be a year when the rubber meets the road. This is especially true for Adobe, which is adding generative AI for video and color management to Premiere Pro. So far, the results are imperfect. Will they become perfect in 2025? We’ll see.
The last twelve months have brought a huge amount of change. Generative AI might have had the headlines, but simple, short clips don’t just magically scale up to a 30-second spot, let alone anything longer. One “insane, mind-blowing” short that recently became popular couldn’t even manage consistent clothing for its lead, let alone any emotion, dialogue or plot. Gen AI remains a tool, not a complete solution for anything of value.
On the other hand, assistive AI has certainly grown a little this year. Final Cut Pro added automatic captioning (finally!) and the Magnetic Mask, Premiere Pro has several interesting things in beta, Jumper provides useful visual and text-based media search today, Strada looks set to do the same early in the new year, and several other web-based tools offer automatic cutting and organizing of various kinds. But I suspect there’s a larger change coming soon — and it starts with smarter computer-based assistants.
Google Gemini is the first of a new class of voice-based AI assistants which you can ask for help while you use your computer, and a demo showed it (imperfectly) answering questions about DaVinci Resolve’s interface. This has many implications for anyone learning complex software like NLEs, and as I make a chunk of my income from teaching people that, it’s getting personal. Still, training has been on the decline for years. Most people don’t take full courses, but just jump in and hit YouTube when they get stuck. C’est la vie.
While assistant AIs will become popular, AIs will eventually control our computers directly, and coders can get a taste of this today. Very recently, I’ve found ChatGPT helpful for creating a small app for Apple Vision Pro, for writing scripts to control Adobe apps, and also for converting captions into cuts in Final Cut Pro, via CommandPost. Automation is best for small, supervised tasks, but that’s what assistants do.
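As a toy illustration of the captions-to-cuts idea, here is a sketch of just the parsing step: turning SRT caption start times into a list of cut points in seconds. (The actual workflow described above runs through CommandPost inside Final Cut Pro; this standalone snippet is only a hypothetical stand-in for that piece of it.)

```python
import re

# Parse SRT caption start times into cut points (seconds).
# SRT timing lines look like: 00:00:01,000 --> 00:00:03,500
SRT_TIME = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def srt_time_to_seconds(stamp: str) -> float:
    """Convert an SRT timestamp like '00:00:04,250' to seconds."""
    h, m, s, ms = map(int, SRT_TIME.match(stamp).groups())
    return h * 3600 + m * 60 + s + ms / 1000

def cut_points(srt_text: str):
    """Return the start time of each caption as a candidate cut point."""
    cuts = []
    for line in srt_text.splitlines():
        if "-->" in line:
            start = line.split("-->")[0].strip()
            cuts.append(srt_time_to_seconds(start))
    return cuts

sample = """1
00:00:01,000 --> 00:00:03,500
Hello there.

2
00:00:04,250 --> 00:00:06,000
Second line.
"""
print(cut_points(sample))  # [1.0, 4.25]
```

From there, a supervised automation tool could translate those seconds into edit commands — exactly the kind of small, bounded task the paragraph above describes.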
Early in 2025, an upgraded Siri will be able to directly control any feature that a developer exposes, enabling more complex interactions between apps. As more AIs become able to interpret what they see on our screens, they’ll be able to use all our apps quicker than we can. In video production, the roles of editor and producer will blur a little further, as more people are able to do more tasks without specialist help.
But AI isn’t the whole story here, and in fact I think the biggest threat to video professionals is that today, not as many people need or want our services. High-end production stalled with the pandemic, and many production professionals are still short of work. As streaming ascends (even at a financial loss), broadcast TV is dying worldwide, with flow-on effects for traditional TV advertising. Viewing habits have changed, and will keep changing.
At the lower end, demand for quick, cheap vertical social media video has cut into requests for traditional, well-made landscape video for client websites or YouTube. Ads that look too nice are instantly recognised as such and swiped away, leading to a rise in “authentic” content, with minimal effort expended. It’s hard to make a living as a professional when clients don’t want content that looks “too professional”, and hopefully this particular pendulum swings back around. With luck, enough clients will realise that if everyone does the same thing, nobody stands out.
Personally, the most exciting thing this year for me is the Apple Vision Pro. While it hasn’t become a mainstream product, that was never going to happen at its current high price. Today, it’s an expensive, hard-to-share glimpse into the future, and hopefully the state-of-the-art displays inside become cheaper soon. It’ll be a slow road, and though AR glasses cannot bring the same level of immersion, they could become another popular way to enjoy video.
In 2024, the Apple Vision Pro was the only device to make my jaw drop repeatedly, and most of those moments have come from great 3D video content, in Immersive (180°) or Spatial (in a frame) flavors. Blackmagic’s upcoming URSA Cine Immersive camera promises enough pixels to accurately capture reality — 8160 x 7200 x 2 at 90fps — and that’s something truly novel. While I’m lucky to have an Apple Vision Pro today, I hope all this tech is in reach of everyone in a few years, because it really does open up a whole new frontier for us to explore.
P.S. If anyone would like me to document the most beautiful places in the world in immersive 3D, let me know?
In 2024, we saw more innovation in both audio-only and audio-video switchers/mixers/streamers, and the democratization of 32-bit float audio recording and audio DSP in microphones and mixers. I expect this to continue in 2025. In 2024, both Blackmagic and RØDE revolutionized ENG production with smartphones. I also began my series about ideal digitization and conversion of legacy analog color-under formats, including VHS, S-VHS, 8mm, Hi8 and U-Matic. I discussed the responsibility of proper handling of black level (pedestal/setup at 7.5 or zero IRE) at the critical analog-to-digital conversion moment, and the ideal methods to deinterlace while preserving the original movement (temporal resolution). That includes ideal conversion from 50i to 50p or 59.94i to 59.94p, ideal conversion from non-square pixels to square pixels, upscaling to HD’s or 4K’s vertical resolution with software and hardware, preservation of the original 4:3 aspect ratio (or not), optional cropping of headswitching, noise reduction and more. All of this will continue in 2025, together with new coverage of bitcoin hardware wallets and associated services.
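One piece of that conversion chain, mapping non-square stored pixels to square ones, comes down to simple arithmetic. A minimal sketch (the pixel aspect ratio values are the commonly cited BT.601 figures, used here as assumptions, since the exact "correct" figures are debated):

```python
# Sketch of the non-square -> square pixel step in legacy digitization.
# PAR = pixel aspect ratio (display width of one stored pixel).

def square_pixel_width(stored_width: int, par: float) -> int:
    """Scale the stored width by the PAR, rounded to an even pixel count."""
    return int(round(stored_width * par / 2)) * 2

# NTSC 4:3 (BT.601): 720x480 stored, PAR ~10/11
print(square_pixel_width(720, 10 / 11))  # 654 -> deliver 654x480 square-pixel
# PAL 4:3 (BT.601): 720x576 stored, PAR ~59/54
print(square_pixel_width(720, 59 / 54))  # 786 -> deliver 786x576 square-pixel
```

In practice this resample would be combined with the deinterlace and upscale passes rather than done as a separate generation, to avoid resampling the image twice.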
In 2024, at TecnoTur we helped many more authors do wide distribution of their books, ebooks and audiobooks. We guided them, whether they wanted to use the author’s own voice, a professional voice or an AI voice. We produced audiobooks in Castilian, English and Italian. We also helped them to deliver their audiobooks (with self distribution from the book’s own website) in M4B format with end-user navigation of chapters. I expect this to expand even more in 2025.
This past year, we saw many innovations in cameras and lenses. For cameras, we are now witnessing the market begin the movement to make medium-format acquisition easier for “everyday filmmakers” rather than just the top tier. Arri announced its new ALEXA 265, a digital 65mm camera designed to be compact and lightweight. The new ALEXA 265 is one-third the size of the original ALEXA 65 and only slightly larger than the still-new ALEXA 35. Yet the ALEXA 265 is only available as a rental.
Regarding accessibility for filmmakers, the ALEXA 265 will not be easier to get one’s hands on; that distinction is reserved for the Blackmagic URSA CINE 17K 65. The URSA CINE 17K 65 is exactly the kind of camera Blackmagic Design and CEO Grant Petty want to get into the hands of filmmakers worldwide. Blackmagic Design has a long history of bringing high-level features and tools to cameras at inexpensive prices. It is the company that bought DaVinci Resolve and then gave it away free with camera purchases. It brought raw recording to inexpensive cameras early in the camera revolution. Now, Blackmagic Design sees 65mm as the next feature, once reserved for an exclusive club of top-tier cinematographers, that it can deliver to everyone at a relatively decent price of $29,995.00, so expect to see the URSA CINE 17K 65 at rental houses sooner rather than later. I also wouldn’t let the 17K resolution bother you too much. Blackmagic RAW is a great codec that is a breeze to edit compared to processor-heavy compressed codecs.
We also saw Nikon purchase RED, but we have not yet seen cross-tech innovation between the two companies. In the years to come, we will see Nikon add RED tech to its cameras and vice versa.
Sony delivered the new BURANO, and I’m seeing the camera at rental houses now. More telling, though, I see more owner/operators with the BURANO than with anything else. It appears Sony has a great camera that will serve owner/operators for a long time.
I feel like I saw a ton of new lenses in 2024, from ARRI’s new Ensō Primes to Viltrox’s anamorphic lenses. We see Chinese lenses coming in from every direction, which is good. More competition benefits all of us and keeps prices competitive. Sigma delivered its new 28-45mm f/1.8, the first full-frame zoom lens with a constant f/1.8 maximum aperture. I tested this lens on a Sony mirrorless, and it felt like the kind of lens you can leave on all day and have everything covered. The depth of field was great in every shot. Sigma has delivered a series of lenses for mirrorless E-mount and L-Mount cameras at an astounding pace, from 500mm down to 15mm.
Canon was miserly with its RF mount. To me, Canon is protecting its investment in lens innovation by restricting who can make RF-mount lenses. I wish it wouldn’t do such a thing. It seems counterproductive to me to block others from making lenses that work on its new cameras. What has happened is that all those other lens makers are making lenses specific to E-mount and L-Mount. In essence, if you are a BURANO shooter, you have more lenses available than a Canon C400 shooter. The story I tell myself is that if I had to buy a camera today, which lenses I could use would be part of that calculus.
On artificial intelligence: we cannot discount how manufacturers use AI to innovate and to shorten the timeframe from concept to final product while saving money. As a creative, I use AI and think of it this way: there will be creatives who embrace AI and those who don’t or won’t, and that will be the differentiator in the long run. I already benefit from AI, from initial script generation (which is only a starting point) to caption generation and transcription to photo editing in Lightroom.
The production of high-quality video used to be restricted by the cost of the kit needed and the skills required to operate that equipment. Those two things helped to regulate the number of people in the market. The last year, though, has seen a remarkable acceleration in the downward pricing trend of items that used to cost a lot of money, as well as an increase in the simplification and convenience of their handling. I tend to review equipment at the lower end of the price scale, an area that has seen a number of surprising products in the last twelve months. These days, anamorphic lenses are almost commonplace hovering around the $1,000 price point, and LED lights have simultaneously become cheaper and much more advanced.
Popular camera brands that until recently only dipped a toe into the video market now offer their own Log profiles, encourage their users to record raw footage to external devices and provide full 35mm-frame recording as though it were the expected norm. LUTs can be created in a mobile phone app and uploaded to cameras to be baked into the footage, and carefree 32-bit float audio can be recorded directly to the video soundtrack for a few hundred dollars and a decent mic. Modern image stabilisation systems, available in very reasonably priced mirrorless cameras, mean we can now walk and film without a Steadicam, and best-quality footage can be streamed to a tiny SSD for long shoots and fast editing. Earlier this year I reviewed a sub-$500 Hollyland wireless video transmitter system that, with no technical set-up, can send 4K video from an HDMI-connected camera to four monitors or recorders – or to your phone, where the footage can be recorded in FHD. I also reviewed the Zhiyun Molus B500 LED light, which provides 500W worth of bi-coloured illumination for less than $600, and the market is getting flooded with small, powerful bi- and full-colour LED lights that run on mini V-lock batteries – or their own internal rechargeable batteries.
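A toy sketch of why that 32-bit float audio is so carefree (pure illustration, not any recorder's actual signal path): fixed-point capture clips an overdriven signal permanently, while float retains the waveform so gain can simply be pulled down afterwards.

```python
# Compare fixed-point vs float capture of an overdriven signal.
# Full scale is 1.0; anything above it clips in a fixed-point recorder.

def clip(x, lo=-1.0, hi=1.0):
    """Hard-clip a sample at full scale, as fixed-point capture does."""
    return max(lo, min(hi, x))

signal = [0.5, 1.5, -2.0]            # peaks well above full scale

int_capture = [clip(s) for s in signal]   # fixed-point: peaks flattened at capture
float_capture = list(signal)              # 32-bit float: huge headroom, nothing lost

# Pull the gain down by 6 dB (x0.5) in post:
print([s * 0.5 for s in int_capture])    # [0.25, 0.5, -0.5]  clipped peaks gone for good
print([s * 0.5 for s in float_capture])  # [0.25, 0.75, -1.0] waveform shape preserved
```

The fixed-point path keeps the distortion no matter how much you turn it down later; the float path recovers a clean signal, which is exactly what makes float recording forgiving for run-and-gun work.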
Now, a lot of these products aren’t perfect and have limitations, but no sooner have the early adopters complained about the faults in the short-run first batch than the manufacturers have altered the design and fixed the issue to make sure the second phase will meet, and often exceed, expectations. We can now have Lidar AF systems for manual lenses, autofocus anamorphics, cheap gimbals so good you’d think the footage was recorded with a drone – even lighting stands, heavy duty tripods and rigging gear are getting cheaper and better at the same time.
Of course all this is great news for low-budget productions, students and those just starting out, but it also means anyone can believe they are a filmmaker. With the demand for video across social media platforms and websites ever increasing, you’d think that would be great news for the industry, but much of that demand is being eaten up by those with no formal training and some clever kit. Not all the content looks or sounds very good, but often that matters less than keeping to a tiny budget. Those who think they are filmmakers can easily convince clients who can’t imagine what their film could look like that they are.
I expect 2025 will bring us more of this – better, more advanced and easier-to-use kit at lower prices, and more people using it. I didn’t need to consult my crystal ball for this prediction. Every year has brought the same gradual development since I joined the industry 28 years ago, but once again it has taken us to places I really hadn’t expected. I expect to be surprised again.
I found 2024 to involve a lot of waiting. Waiting for the industry to right itself, waiting for the latest AI tool to come out of beta, waiting for companies to put stability before bells and whistles. That, I fear, may be rather a long wait. I also found I had very mixed feelings about AI – on the one hand, I was excited to see what advances technology could bring, and on the other, saddened by the constant signs that profit is put before people – whether in the plagiarising of artists’ work or in the big Hollywood studios wanting actors’ digital rights.
Generative AI impresses me whenever I see it – and I think we have to acknowledge the rate of improvement in the last few years – but I also struggle to see where it can fit in my workflow. I am quite looking forward to using it in pre-production – testing out shots before a shoot or while making development sizzles. To that end, it was great to see OpenAI’s text-to-video tool Sora finally come into the public’s hands this month, albeit not in the UK or Europe. Recently, Google’s Veo 2 has been hyped as much more realistic, but it’s still in beta, and you have to live in the US to get on the waiting list. Adobe’s Firefly is also waitlist-only – so there’s more waiting to be done – yet it could well be that 2025 brings all of these tools into our hands and we get to see what people will really do with them outside of a select few.
On the PC hardware front, marketing teams went into overdrive this year to sell us on new “AI” chips. Intel tried to convince us that we needed an NPU (neural processing unit) to run machine learning operations when there were marginal gains over using the graphics card we already had. And Microsoft tried to push people in the same direction – requiring specific new hardware to qualify for Copilot+. Both companies are trying to catch up with Apple on battery life, which I’m all for, but I wish they could be more straightforward about how they present it.
I continued to get a lot out of machine-learning-based tools, whether it was using a well-trained voice model in ElevenLabs or upscaling photos and video with Topaz’s software. I also loved the improvements that Adobe made in its online version of Enhance Speech, which rescued some bad audio I had to work with. Some of these tools are starting to mature – they can make my life easier and enable me to present better work to my clients, which is all I want at the end of the day.
For me, 2024 brought a lot of personal life challenges, which precluded the kind of deep-dive involvement in the world of AI I managed in the previous two years, but I did catch up on some of the more advanced generative AI video/animation tools to explore and demo for the CreativePro Design + AI Summit in early December. I created the entire 40-minute session with generative AI tools, including my walking intro chat and the faux TED Talk presentation, using tools like HeyGen, ElevenLabs, Midjourney and Adobe After Effects. As usual, I did a complete breakdown of my process as a reveal toward the end of my session and will be sharing that process in a PVC-exclusive article/video in January 2025.
The rate of development of AI animation/video tools such as Runway, Hailuo, Leonardo and others building text- and image-to-video tools is astounding. I think we’re going to see a major shift in development in this area in 2025.
I’m also exploring generative music and audio AI tools, including hardware/software solutions, in the coming months, and I expect to see some amazing growth in quality, production workflows and accessibility for a public of largely non-musicians.
As usual, I’m only exploring the tools and seeing how they can be utilized, but also am concerned for the direction all of this is heading and how it affects content creators and producers in the end. I always take the position that these are tools we can utilize in our workflows (or at least have an understanding of how they work) or choose to ignore them and hope they’ll just go away like other fads… which they won’t.
Christmas morning is a flurry of impatience; usually an explosion of expectation, surprise, and wrapping-paper scraps. I used to describe NAB as “camera Christmas” to newcomers. But with announcements coming by email and NAB turning primarily into events, meetings, and conversations, the giddy elf feeling I used to get from seeing the new floor models has turned into excitement at seeing familiar faces.
So where has our impatience shifted? It seems we now find ourselves in a waiting game for presents in 2025.
That new Adobe Gen-AI video model? Hop on the waitlist. Hoping to see more content on the Vision Pro, perhaps with the Blackmagic URSA Cine Immersive? Not yet. Excited about Sigma making RF lenses? They started with APS-C.
Patience is not one of our virtues. With shorter camera shelf lives and expected upgrade cycles, we assume we will hold onto our gear for shorter periods than ever and are always ready for a change. Apple’s releases make our months-old laptops seem slow. A new AI tool comes out nearly every day.
Video editors are scrambling for the few job openings, adding to their skill sets to be ready for positions, or transitioning to short-term jobs outside of video, all alongside the anxiety of AI threatening to take over this realm. We rejoiced when transcription was handed off to a robot. We winced when an AI program showed it could make viral-ready cuts.
Just because we are forced to wait does not mean we are forced to be behind. It is cheaper than ever to start a photography journey. Mastering the current tools can make you a faster editor. Teaching yourself and others can help create new stories. While I personally don’t fully believe in the iPhone’s filmmaking abilities, there ARE plenty of tools to turn the thing-that’s-always-on-you into a filmmaking device.
In 2024, we were forced to wait. But we are not good at waiting. That’s the same tenacity and ambition that makes us good at storytelling. It’s only a matter of time. It’s all a matter of time. So go forth and use your time in 2025.
For me, 2024 was a case of ‘the more things change, the more they stay the same’. I had a busy and productive year working for a range of clients, some of whom I hadn’t worked with for many years. It’s nice to reconnect with former teams, and I found it interesting that briefs, pitches and deliveries hadn’t changed a great deal with time.
The biggest change for me was finally investing in a new home workstation. Since Covid, I have been working from home 100%, but I was using an older computer that was never intended for daily projects. Going through the process of choosing components, ordering them and then assembling a dedicated work machine was very rewarding, and something I should have done sooner. Now that my home office has several machines connected to a NAS with 10-gig ethernet, I have more capacity at home than some of the studios I freelance for – something I would have found inconceivable only a few years ago!
Technically, it seems the biggest impact AI has had so far is providing a topic for people to write about. Although AI tools continue to improve, and I use AI-based tools like Topaz and Rotobrush regularly, I’m not aware of AI having had any impact on the creative side of the projects I’ve worked on.
From my perspective as an After Effects specialist, the spread of HDR and ACES has helped After Effects become increasingly accepted as a tool for high-end VFX. The vast majority of feature films and premium TV shows are composited in Nuke pipelines, but with ACES having been built into AE for over a year, I’m now doing regular cleanup, keying and compositing in After Effects on projects that wouldn’t have been available to me before.