As this series on After Effects and Performance draws to a close, there are a few loose ends to tie up. The intention has always been to conclude this series by interviewing the After Effects development team, and while that’s being arranged it’s a good opportunity to recap everything to date, and fill in a few details. This series has become much larger than I expected. Initially it was going to be a single article about the difficulties of choosing a CPU for After Effects – and since then it’s grown to cover many other topics. It’s been a few months since I posted the last article, and during that time I’ve been compiling the series into a single download (coming soon). Proof-reading everything has helped me spot a few things I’d missed.
So apologies if this article is a little erratic and disjointed, but it’s a collection of bits from the cutting room floor, and other stuff that didn’t fit in earlier articles.
What’s it all about?
While this series has covered a broad range of topics, one thing I haven’t covered is purchasing advice. In fact this series is almost anti-advice. The main point I made in part 1 was that After Effects is used for many different things, and different types of animation work use different parts of a computer. A computer based on a given set of components may work great for person A, but not be so great for person B. If you have a set budget (most people do) then it’s not feasible to recommend hardware components that will suit everyone. Also, the information will date very quickly. There are really only two pieces of hardware advice that I feel strongly about: firstly, always work from an SSD, and secondly, never buy a Quadro GPU specifically for After Effects work.
So let’s re-visit the variety of After Effects users listed in part 1, and clarify which system components will affect each type of work.
Motion Graphics Designer, 2D style
The first After Effects user I listed is a hypothetical motion graphics designer, who primarily uses AE to generate 2D elements. The main tools used in AE are text layers, shape layers, solids and plugins. Colour management isn’t an issue, as all projects are viewed online, and all work is done in 8-bit mode.
This user is really only stressing the CPU. They’re not using any 3D software, so a GPU won’t impact their performance. RAM and disk cache will help with their workflow, but won’t affect rendering times. System and network bandwidth won’t be an issue either, as After Effects is generating most of the elements rather than loading footage from disk.
As detailed throughout the series, After Effects doesn’t currently take advantage of multi-core CPUs, so the best choice would be a CPU that has a high clockspeed, even if it doesn’t have many cores. The good news for these users is that they don’t have to spend a huge amount of money to get the best performance possible – they don’t need monster GPUs or high-end multi-core CPUs. The biggest danger they face is overspending on expensive components that won’t make any difference to AE.
Motion Graphics Designer, 3D
The second AE user I listed is someone who primarily works in Cinema 4D, and uses After Effects more as a tool for compositing and grading than for animation. These users are the most difficult bunch to please, because they’re working across both 2D and 3D software, and 2D and 3D utilize different parts of the system. As mentioned in Part 15, 3D rendering is an extremely unusual example of something that continues to get faster the more CPU cores you throw at it. 3D rendering, as well as certain types of particle simulation, is referred to as “embarrassingly parallel” – the work divides neatly into chunks that can be calculated independently of each other – and there aren’t many other types of software where this is the case. This means that if you have the money, you can keep buying CPUs and GPUs with more processing power and your 3D work will render faster; After Effects, however, will not.
In many ways this problem was one of the key motivations for writing this series – 3D animators were paying lots of money for powerful CPUs and GPUs to make their 3D renders faster, but After Effects was just as slow as always.
For motion designers who already have powerful machines but are looking for improved performance in After Effects, hardware options are limited. If you’re using a lot of supplied footage, make sure it’s copied to a fast local drive – and not being read from a slow external drive or a 1 gigabit network. If you already have a beefy CPU and GPU then the next thing to consider is a dedicated SSD as a disk cache.
Otherwise, I’d suggest looking for performance solutions through workflow and software, and not more hardware. The main avenue here would be 3rd party scripts to render After Effects comps in the background. These will help utilize the full power of the machine, and allow you to continue working in After Effects while rendering at the same time.
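As a rough illustration of what those background-rendering tools are doing under the hood, here’s a minimal sketch using Adobe’s own aerender command-line renderer (introduced back in Part 7). The paths and comp name here are hypothetical – substitute your own – and the dedicated 3rd party tools add queue management and notifications on top of the same basic idea.

```python
import subprocess

# Hypothetical paths and comp name - substitute your own.
AERENDER = "/Applications/Adobe After Effects 2021/aerender"
PROJECT = "/Projects/client_spot.aep"
OUTPUT = "/Renders/client_spot_[#####].tif"

# Launch aerender as a separate background process. Because it renders
# outside the After Effects GUI, you can keep working while it runs.
proc = subprocess.Popen(
    [AERENDER, "-project", PROJECT, "-comp", "Final Comp", "-output", OUTPUT]
)
print(f"Background render started (pid {proc.pid})")
```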
Secondly, I’d be looking at workflow to see if changing some of your habits will speed things up. For example – don’t render to PNGs. As detailed earlier, PNGs are MUCH slower to read and write than other file formats. If you don’t need to use colour management, then turn it off altogether. If you do need to work with colour management then make sure your 3D renders match your project working space – sRGB or rec 709. The motion designers I’ve worked with who use Cinema 4D have all used sRGB or rec 709 colour – I haven’t come across any who worked with 32 bit linear projects. However, if you’re one who does, see below for tips on using EXR files.
If you’re someone who’s routinely working with 3D and you already have a high-end machine, then the next logical step is to look at setting up your own render farm, and choosing a solution that can work with After Effects as well as your 3D software.
Compositor, children’s TV series
The third AE user I listed is someone working on a hypothetical children’s TV series, keying out HD ProRes footage and compositing it onto 2D backgrounds.
As with our first user, there are no 3D elements involved so there’s no point wasting money on a powerful GPU. Assuming we don’t have an unlimited budget, once again we’re better off with a CPU that has a high clockspeed rather than a high number of cores.
The main thing for our user to be careful of is that the ProRes files they’re working with are on a local SSD. As detailed throughout the series, much of the time After Effects spends rendering is really spent shuffling data around the system. An external USB hard drive can be much slower than a fast internal SSD, so investing in a dedicated SSD and copying files locally may result in a dramatic speed improvement.
If the supplied ProRes files are in rec 709 colourspace, then it might be possible to work with colour management turned off for a small boost in speed. But if they’re in a different colour format (eg Arri LogC) then pre-rendering them to rec 709 will make working with the project much faster. In this case, it’s a workflow step that will improve overall performance, not a hardware purchase: set up all the chromakeys using the LogC source files, and pre-render them to a ProRes 4444 with alpha. Working with the pre-keyed ProRes file, now in rec 709 colourspace, will be dramatically faster than working with the original LogC files with a keying plugin applied in the main composition.
Compositor, TVCs & feature films
The fourth AE user I listed is one who reflects my own typical projects, compositing high resolution plates from multiple sources, in 32 bit mode. Again, this is a scenario where a GPU will not make any difference to rendering times, as there aren’t any 3D elements in the workflow. The main issue here is bandwidth: dealing with lots of large layers.
Again, a CPU that has lots of cores will not be as fast as a CPU with a higher clockspeed, but in this scenario the slowest part of the system will probably be the network. Unlike our Cinema 4D user above, who renders locally, in this case we’re looking at a team effort with separate 2D and 3D departments – working over an office network with shared network storage.
Improving performance for these types of projects mostly involves improving bandwidth. This can include upgrading to a 10-gig Ethernet connection if possible, and using a large, dedicated SSD as a disk cache. If the office network and shared storage is slow enough to impact performance (it probably is) then the next step up is to look for 3rd party software that will automatically mirror the files on the server to a local SSD drive. This might take some research and effort to set up, however the performance gains can easily be much greater than buying a more expensive GPU.
When working with large resolution files we want as much RAM as we can afford – 64 gig minimum – so our workflow can benefit from cached frames and layers. Storing frames in RAM is thousands of times faster than loading a frame over a regular 1-gigabit Ethernet connection.
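Some back-of-the-envelope numbers make the point. This sketch assumes uncompressed 32 bit float RGBA frames and roughly 110 megabytes per second of real-world throughput on gigabit Ethernet – rough assumptions, but in the right ballpark.

```python
def frame_mb(width, height, channels=4, bytes_per_channel=4):
    """Size of one uncompressed frame in megabytes (4 bytes per channel = 32 bit float)."""
    return width * height * channels * bytes_per_channel / 1e6

uhd = frame_mb(3840, 2160)
print(f"One 32 bit UHD frame:      ~{uhd:.0f} MB")       # ~133 MB
print(f"Frames in 48 GB of cache:  ~{48_000 / uhd:.0f}")  # ~360 frames
print(f"Load over 1-gig Ethernet:  ~{uhd / 110:.1f} seconds per frame")
```

A frame that’s already cached in RAM is available almost instantly; the same frame fetched over a gigabit network takes over a second.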
When working with 32 bit 4K files everything is going to be relatively slow, but we can also look beyond the computer hardware for other ways to improve performance.
If you’re working in 32 bit linear mode then After Effects will be sluggish anyway, but you can improve things by rendering (and pre-rendering) to 32 bit linear EXRs. As detailed in part 14, working with footage that’s in a different colourspace to your project slows things down, as additional colour processing is required for every layer of every frame. If your footage matches your project’s colourspace then no colour processing is needed, and everything is faster. EXR files store 32 bit linear data, so if your After Effects working space is 32 bit linear then there’s no colour conversion needed, and EXRs will be significantly faster to load than TIFFs, PNGs, JPGs, or anything else in sRGB or rec 709. EXR files in After Effects had problems in the past (all fixed now), so you might be surprised at the performance boost you can get by using them instead of TIFFs and PNGs.
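To see why matching colourspaces matters, here’s a simplified sketch of the kind of conversion involved. The function is the standard sRGB decoding curve; the exact processing After Effects performs depends on the profiles involved, but the point stands – it’s per-pixel maths that simply disappears when the footage already matches the working space.

```python
import numpy as np

def srgb_to_linear(s):
    """Standard sRGB decoding curve (IEC 61966-2-1)."""
    return np.where(s <= 0.04045, s / 12.92, ((s + 0.055) / 1.055) ** 2.4)

# One UHD frame of RGB values in the 0-1 range.
frame = np.random.rand(2160, 3840, 3).astype(np.float32)

# In a 32 bit linear project, a conversion like this has to run on every
# pixel of every non-linear layer, on every frame.
linear = srgb_to_linear(frame)  # ~25 million values transformed per layer
```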
If you’re already rendering to EXR sequences, then I’ve found that having multiple passes as individual files can help improve performance, as opposed to having a single EXR with multiple channels embedded.
For the same reasons, if you have source footage that’s in a RAW format, or a proprietary Log colourspace (eg R3D, LogC, Slog, Clog, BRAW) then pre-rendering those files to an EXR sequence is an easy way to significantly improve final render times. R3D and BRAW files can be especially heavy, so pre-rendering them to EXR sequences can improve your workflow dramatically.
At this level it’s also worth exploring 3rd party plugins which are faster (and better!) than the native Adobe plugins. The examples I’ve used before are Neat Video’s denoise plugin and Frischluft’s Lenscare plugins – both are much faster than the Adobe equivalents, and they look better as well.
Character Animator
The final example is someone who uses AE for character animation. From a hardware perspective, this is the same as our first example. Performance should be pretty good compared to those using live action footage, as bandwidth isn’t a major issue when working with Illustrator files.
This user is a good example of someone who may not be able to significantly improve their rendering times by upgrading hardware, but who may find a range of 3rd party scripts that can help improve their productivity. aescripts + aeplugins release new tools every week that can help automate repetitive and tedious tasks.
CPU Pro 9000 ultra
One of the most difficult parts of choosing which hardware to buy is working out what all the sales terms mean, especially when competing companies use different terms for the same thing.
In Parts 5 & 6 I outlined the basic history of CPUs, and how performance has improved over time. We’re at a stage now where a single chip contains multiple CPU cores, with the main choice for consumers being Intel or AMD. Soon, Apple will provide a 3rd option as they continue to roll out their own CPUs, and software is adapted for them.
There are a few details that slipped through the cracks in the earlier parts, mostly to do with CPU features and how they relate to After Effects.
While a CPU is usually associated with the raw processing power of a computer, picking one model over another isn’t just about processing speed. The choice also affects a few other system components – most notably RAM and PCIe busses. Lower-end CPUs have limits on the amount of RAM they can use, and also support fewer PCIe lanes than their premium counterparts.
While After Effects doesn’t take full advantage of multiple CPU cores, it does love RAM, and it’s heavily bandwidth dependent, so a cheaper CPU might limit your options for expansion in the future.
While the exact numbers will change every year (and these might be out of date already) a few years ago the Intel i3 CPU range only supported a maximum of 32 gig RAM, while the i5 topped out at 64 gig RAM. While you can still get plenty of work done in After Effects with 32 gig of RAM, if you’re buying a system that you’d like to expand in the future then check what the limit is before buying. Right now, it’s not ludicrous to consider machines with 128 or even 256 gig of RAM, but if you want that much you’ll need a CPU to match.
X marks the spot
While Intel’s CPU lineup is pretty complex, it does have 2 clear product lines – the Core line and the Xeon line. The Xeon range is generally more expensive, and often described as “professional” – so are they worth it for After Effects? This sounds a lot like the question we asked about Quadro GPUs, and the answer is the same – No.
There are plenty of articles online that will explain the pros and cons of Xeon processors, but when it comes to After Effects they do not deliver more performance for the price. They also use ECC RAM, which is more expensive than the conventional RAM used by the Core line of CPUs, and they consume more power and produce more heat.
Xeon CPUs have an Intel feature called “Hyperthreading”, which makes the CPU appear to have more processing threads than it has physical cores. How this works and why it’s useful is the topic of many online articles, but the simple answer is NO, this does not usually make After Effects faster.
As detailed throughout the series, After Effects does not utilize multiple CPU cores – real or virtual – and so hyperthreading is not an obvious benefit. Because After Effects relies more on system bandwidth than CPU processing, Hyperthreading presents exactly the same problems as early efforts at multiprocessing, where you have multiple CPU cores fighting over the same limited resources, and everything becomes slower overall. Some high-end Core processors also have Hyperthreading, or “virtual cores”, but you’ll want to experiment with it to see if it makes any difference to the types of projects you’re working on. In some cases it can make a small improvement, but anecdotally, all AE users I know have found they get faster performance with it turned off.
What’s the story, motherboard?
Although I briefly mentioned the motherboard (aka logic board) in Part 4, I didn’t go into a lot of detail that’s specific to After Effects. Motherboards are a little bit complicated in the same way that RAM is complicated – which was the topic of part 9. As I said in Part 4, if you’re buying a Mac then you just take whatever Apple give you. But if you’re looking at Windows systems then you get to shop around and choose your motherboard as well as your CPU.
When people discuss the “speed” of a computer, they’re nearly always referring to how fast the CPU can process calculations. Computers are basically built around the CPU, and as mentioned in Part 5, for many people the CPU is the computer. With this strict definition, then the motherboard does not have a direct effect on the CPU’s speed. Choosing one motherboard over another will not change your Cinebench score. But, like RAM, there’s more to it than that…
Each CPU is designed around a particular shape and number of pins, and they will only fit into a motherboard that has a matching socket. Intel change the sockets more often than competitor AMD, which means you can’t always swap out your existing Intel CPU for a newer model, as the newer version might not fit into the same motherboard. Each new generation of CPU usually has a new generation of motherboard socket, and motherboard chipset to match. Like all system components, motherboards come in a range of prices with a range of features – from less than $100 up towards $1000.
A more expensive motherboard will not make your CPU process calculations any faster than a cheaper version, so in one sense it’s correct to say that the motherboard doesn’t affect the computer’s speed. However the range of features that different motherboards support can definitely have an impact on overall system performance. Part 9 looked at how increasing RAM can improve productivity in After Effects without improving actual render times, and motherboards are similar – the different features can affect overall productivity without making the CPU itself faster.
So what’s the difference between a $50 motherboard and a $900 motherboard? It’s mostly down to bandwidth and expandability. And coloured LEDs, if you’re into that sort of thing.
The easiest way to expand the capabilities of a computer is by plugging things in, either via USB ports or internal PCIe cards. If we want to add fast NVMe SSDs then we’re looking for M.2 slots. If we want multiple GPUs then we’re looking at PCIe slots, and if we want the fastest external peripherals then we’re looking for USB 3.0 and 3.1 ports. Both USB and PCIe come in a range of versions, with each newer version being faster. Thus, PCIe 3 is faster than PCIe 2, and PCIe 4 is faster than PCIe 3. The more PCIe lanes available and the faster they are, the more expensive the motherboard.
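To put some rough numbers on it, here are the commonly quoted usable bandwidth figures per PCIe lane for each generation – approximate values, after encoding overhead.

```python
# Approximate usable bandwidth per PCIe lane, in GB/s, by generation.
PCIE_LANE_GBPS = {2: 0.5, 3: 0.985, 4: 1.969}

def slot_bandwidth(gen, lanes=4):
    """Rough throughput of a PCIe slot - e.g. an x4 M.2 socket for an NVMe SSD."""
    return PCIE_LANE_GBPS[gen] * lanes

for gen in (2, 3, 4):
    print(f"PCIe {gen} x4: ~{slot_bandwidth(gen):.1f} GB/s")
# PCIe 2 x4: ~2.0 GB/s, PCIe 3 x4: ~3.9 GB/s, PCIe 4 x4: ~7.9 GB/s
```

Each generation roughly doubles the one before it, which is why an NVMe SSD in a PCIe 4 slot can move data far faster than the same class of drive on an older board.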
A recurring theme of this series is that After Effects spends a lot of time shuffling data around, and so performance can depend more on system bandwidth than raw CPU processing power. While choosing one motherboard over another won’t impact raw CPU performance, it can dramatically affect the speed that data is shuffled around – and this WILL be reflected in the render times of certain After Effects projects.
When Sony launched the Playstation 5, one of the stats which caught a lot of attention was the speed of the internal SSD. By using a new PCIe 4 bus, Sony claim that the SSD can transfer up to 9 gigabytes of data per second. By comparison, an old USB 2 hard drive may only transfer around 50 megabytes per second. This is a huge difference – the USB drive is roughly 180 times slower. If we assume we have a 9 gigabyte ProRes file, then in theory the Playstation’s SSD could copy it in 1 second, while it would take about 3 minutes to copy from an external USB 2 drive. The speed difference isn’t just reflected in copying files, but in everyday usage. The “Brady Bunch” title sequence example I’ve used a few times in this series is one where rendering times are affected by system bandwidth just as much as CPU speed.
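Extending the example, here’s our hypothetical 9 gigabyte ProRes file copied at indicative sustained speeds for a range of connections. These are ballpark figures, not benchmarks – real-world results vary.

```python
# Indicative sustained transfer speeds in MB/s - real-world figures vary.
SPEEDS_MBPS = {
    "USB 2 hard drive":    50,
    "1-gigabit Ethernet":  110,
    "SATA SSD":            500,
    "10-gigabit Ethernet": 1100,
    "PCIe 4 NVMe SSD":     7000,
}

FILE_MB = 9_000  # our hypothetical 9 gigabyte ProRes file

for device, mbps in SPEEDS_MBPS.items():
    print(f"{device:>19}: {FILE_MB / mbps:7.1f} seconds")
```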
Right now, there are no Intel motherboards available with PCIe 4, but they’re coming soon – and while I’ve mostly steered clear of making purchasing recommendations, I would strongly recommend that anyone looking to build a new system around an Intel CPU should wait until PCIe 4 motherboards become available. Luckily for AMD fans, there’s already a range of motherboards available for the latest AMD CPUs that have PCIe 4.
Many of the complaints about the speed of After Effects have come from 3D users who’ve spent a lot of money on multi-core CPUs and GPUs. These “God Boxes” may be great for rendering 3D animation, but current versions of After Effects simply aren’t designed to utilize that power. Conversely, 3D animation is rarely limited by system bandwidth, so investing money in a PCIe 4 motherboard, with dedicated NVMe SSD drives for After Effects, will improve After Effects performance but not make much difference to 3D rendering.
Caps Lock
Part 14 looked at how workflow and settings inside AE can affect performance. Rendering at half resolution is 4 times faster, rendering at ¼ resolution is 16 times faster, and so on – a quick sketch below shows why. I’d intended to include a little note about Caps Lock, but it slipped through the cracks.
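Before we get to Caps Lock, here’s why those resolution numbers work out the way they do: pixel count scales with the square of resolution, and render time scales roughly with pixel count (a simplification, but a reasonable one for most effects).

```python
def pixel_fraction(scale):
    """Fraction of full-resolution pixels rendered at a given scale."""
    return scale * scale

for scale in (0.5, 0.25, 0.125):
    f = pixel_fraction(scale)
    print(f"scale {scale}: {f:.4f} of the pixels -> roughly {1 / f:.0f}x faster")
# scale 0.5 -> 4x, scale 0.25 -> 16x, scale 0.125 -> 64x
```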
There are still After Effects users out there – mostly older ones – who will tell you that After Effects will render faster if you have caps-lock on. This isn’t the case anymore, but it’s an interesting case study – there was a time when this did help. So what’s the story?
Firstly, we have to remind ourselves that caps-lock disables updates in the composition window. If you have caps-lock on, then the composition window will display black with a red message bar. Many years ago, when computers were much slower and less powerful than they are now, rendering the composition window preview took a notable proportion of the CPU’s power. Once you hit “render” in After Effects, you couldn’t do anything else. You just had to sit there and wait for the render, and the composition window would display each frame as it was rendered. But if the composition window was set to a different resolution than the render – for example, if your render was set to full resolution but the composition window was set to half resolution – then the CPU had to scale down each rendered frame before it was displayed. In the days when CPU speed was measured in megahertz, this was a noticeable overhead. Turning on caps-lock disabled the composition window preview, so the CPU didn’t waste processing power scaling down preview renders. Hence the rendering time was (very slightly) faster.
These days CPUs are so much faster and more powerful that the composition window doesn’t make any significant difference to render times, and since multi-tasking became normal we’re all used to jumping on the internet or checking email, or doing all sorts of other things while AE is rendering – these things will tax the CPU much more than updating the composition window. So the caps-lock “trick” is just an echo of what things were like 20 years ago. It won’t make any difference to your rendering times, and even when it did the difference was very small.
Real Time 3D Engines
If “performance” wasn’t an issue for After Effects users, this series wouldn’t have been written. But a lot of the angst about After Effects and performance comes from comparisons to 3D – and the incredible leaps forward made by 3D technology over the past decade.
Many articles in this series have addressed the differences between the way After Effects works (compositing 2D bitmaps together) and how 3D works (processing 3D geometry). In the previous article, where I interviewed a bunch of software developers, it was clear that the last decade of graphics advances has mostly involved 3D geometry, with After Effects not getting an invite to the party.
While I think the distinction between 2D and 3D is fairly clear, within the overall world of “3D” the differences between real-time and non real-time are much larger than some outsiders realize. Because this isn’t immediately relevant to After Effects, I edited large explanations out of earlier articles. But as the performance angst is still here for AE users, let’s demystify real-time 3D a little.
Firstly, let’s clarify terminology. Traditional 3D animation does not render in “real-time”. We’re all too painfully aware of this. 3D rendering times can range from several seconds through to several days for a single frame. All of the well known rendering engines that are used with 3D animation software are in this category, regardless of whether or not they support GPU acceleration. Vray, Arnold, Corona, Redshift and Octane are just a few of the 3D rendering engines out there which render 3D images – but not in real time.
Alternatively, there are 3D engines designed from the ground up to produce images in “real-time” – that is, they render frames almost instantly. For many years, the term “real-time 3D” primarily referred to games. As detailed in parts 10 and 11, the development of the modern GPU has been driven by the gaming industry. But real-time 3D hasn’t always been limited to games. Broadcasters have used systems such as Viz to produce real-time graphics for weather updates, sports scores, news graphics and other live broadcast applications. Museums and exhibitions have used real-time 3D engines for interactive kiosks. Two competing engines, Unreal and Unity, dominate the real-time 3D industry, although there are many other options available.
While the key difference between “real-time” and “non-real-time” might appear to be rendering speed, they also represent fundamentally different workflow approaches to producing a “final” result. The differences extend waaaaay beyond rendering speed, and encompass all aspects of a production pipeline.
If you’re creating a 30 second TV commercial in Cinema 4D, then it isn’t as simple as thinking “oh, I could render with Vray and it will take 5 hours. Or, I could render it with Unreal and it will be done in 30 seconds”. If only…
Real-time 3D engines such as Unreal and Unity can produce stunning outputs in real-time, but only with an incredible amount of preparation.
A real-time 3D engine can be compared to a Formula 1 car. A Formula 1 car is not simply a normal car that goes very fast. It is a finely tuned machine that has been designed and built to do one thing: race on Formula 1 race tracks. Outside of a race track, Formula 1 cars are pretty useless. The tyres only last for about an hour and you can’t even start the engine without an entire pit crew of mechanics with dedicated equipment. If you’re at a race track, and you have a garage full of support mechanics, then great – a Formula 1 car will go faster than anything else around. But as soon as you take it out of that environment then the performance won’t seem so amazing – you can’t even get the thing started.
In that respect, real-time 3D engines are a bit like Formula 1 cars. They’ve been designed from the ground up to do one thing: render games really quickly. If you have an office full of supporting artists and programmers, then great – you will be able to produce stunning graphics that render in real time. But outside of a gaming environment, real-time 3D engines have a lot of restrictions. A Formula 1 car might look great on track, but you can’t jump in one to get some milk from the supermarket. In the same way, a 3D gaming engine might look stunning when you’re playing a game, but that doesn’t mean you can whack it in Cinema 4D and render everything in real time. Looking at a 3D game and wondering why your Maya scene still takes 10 minutes to render a frame is like looking at a Formula 1 car and wondering why your Toyota can’t go around corners at 300 kph.
From a cost perspective, real-time 3D is not necessarily cheaper than traditional 3D rendering. The rendering itself might be fast, but the additional time, effort and resources required to build a 3D scene that can render in real-time might outweigh the costs saved in rendering time. A typical project that could normally be completed by a single 3D artist might require two or three additional assistants to produce it with a real-time engine.
Fudge Factor Five
Parts 10 and 11 of this series covered the history of 3D graphics, and the development of modern GPUs. Last year nVidia unveiled their latest range of GPUs and created a lot of excitement around “real-time ray tracing”. So what was the big deal?
As mentioned in previous articles, “ray tracing” is one of the algorithms used to create photorealistic 3D renders. Ray tracing is great for rendering specular lights, reflections, refractions and shadows. However it’s always been very slow and computationally expensive.
3D animation, and rendering with slow algorithms like ray tracing, had been around for many years before 3D graphics began to appear in computer games. Quake might have been the first 3D game to incorporate hardware acceleration, but the rendering techniques it used were a very long way from “photorealistic” 3D rendering. Quake was a hit, and definitely a significant milestone in the history of computer games, but it definitely wasn’t ray tracing. In order to be able to play the game at a respectable speed on a home computer, the Quake rendering engine was designed from the ground up for rendering speed – not image quality.
Ray tracing is known for its accurate reflections and refractions, which it creates by calculating the path of individual light beams as they bounce around and interact with objects (hence the name – it’s tracing the path of light rays). A real-time gaming algorithm, however, doesn’t come close to genuinely calculating these sorts of effects. Instead, everything is “fudged”. Yes, modern games have shadows, but they’re not “real” shadows – they’re fake. Reflections are faked, glass and water effects are faked, and so on.
From the time Quake was released up until the present day, the fudge in gaming rendering engines has become more and more sophisticated, and the results have become more and more stunning. Modern gaming engines haven’t just worked out how to fudge shadows and reflections, they’ve worked out how to fake more advanced lighting effects such as caustics, high dynamic range images, fog, volumetric lights, global illumination, depth-of-field and motion blur, as well as lens flares and lens distortions. While the results can be breathtaking – especially considering that they’re rendering in real-time – the algorithms used to generate them are worlds apart from those used in 3D animation packages. They may look beautiful, but they’re not realistic. The fudge has just been turned up to 11.
Real Time Ray Tracing
For over 20 years there were quiet rumors and whispers of games that were going to have real-time ray tracing. Every so often a new demo would pop up and provide tech journalists with something to write about for a few days, and then nothing would come of it. But while there was a lot of excitement in the press about the idea of real-time ray tracing, in reality everyone was excited for the wrong reasons.
Ray tracing is the umbrella term for a collection of algorithms that can generate photorealistic computer graphics. Gamers seemed to be excited about real-time ray-tracing because it suggested that the quality of the graphics would be more realistic – games would look nicer, prettier, and otherwise “better”. However graphics quality was never the driving force behind the quest for real-time ray-tracing, and current gaming engines are so sophisticated that it’s very difficult to see the difference between a ray-traced image and one generated by a gaming engine. Today’s 3D games are the result of over 20 years of optimization and innovation, and can produce graphics on par with many Hollywood films (and sometimes exceeding the bad ones).
So if the current gaming engines are so good, then why the continued excitement about real-time ray tracing?
The actual answer is development costs.
Although the latest gaming engines produce stunning images, they still rely on a lot of technical fudge to make that happen. Like a Formula 1 car, a real-time 3D engine requires a dedicated support team to prepare everything, just to get it working. Developing graphics for gaming engines takes a huge amount of effort and resources, and when it comes to developing games that equals large amounts of money. While there are many gaming engines out there, all of them rely on various amounts of fudge to produce decent quality images – and this can include basic components of a scene, such as lighting, shadows and reflections. The promise of real-time ray-tracing is not superior graphics, but rather a streamlined development process. Because the ray-tracing algorithm calculates “real” lighting, shadows, reflections and refractions, these elements no longer need to be “fudged”, and no longer need an army of supporting developers to prepare. The potential of ray tracing is to remove the fakery, and to vastly simplify the development of games. This might have the side-effect of making the graphics look better, but the more pressing issue is reducing development time and costs.
I can actually share one example from my own experience. Many years ago I had to prepare a bunch of video files to be played back in a real-time 3D engine. The video files had to be delivered as an image sequence, which wasn’t a problem, but the pixel dimensions had to be a power of two. In other words, no matter what size or aspect ratio the video was, I had to squash it so that each dimension was a number in the series 1, 2, 4, 8, 16, 32 etc. I was working with HD video at the usual resolution of 1920 x 1080, but the files I delivered had to be scaled to a size of 2048 x 1024, just so the 3D engine could open them. Inside the 3D scene, the image would be re-sized back to a 16 x 9 aspect ratio so it would still look correct – but it had to be pre-processed and delivered in this specific format.
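For the curious, the rounding involved looks something like this – a sketch that reproduces the 2048 x 1024 delivery size described above. (Whether an engine needs dimensions rounded up or to the nearest power of two varies from engine to engine; this sketch rounds to the nearest.)

```python
def nearest_power_of_two(n):
    """Round n to the closest power of two."""
    lower = 1
    while lower * 2 <= n:
        lower *= 2
    upper = lower * 2
    return lower if (n - lower) <= (upper - n) else upper

width, height = 1920, 1080
print(f"{nearest_power_of_two(width)} x {nearest_power_of_two(height)}")
# -> 2048 x 1024, the delivery size described above
```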
Pre-rendering a bunch of videos to a different size might not seem like a big deal, but it’s just one example of an asset needing to be specifically prepared for a real-time engine. When it comes to game development, teams of people spend all their time pre-processing and preparing assets.
Developing graphics for games is all about managing budgets – and not just financial ones. There are strict technical limits on all aspects of every graphics element in a game, from the number of polygons a model can have, to the resolution of the texture maps being applied. In many cases, the same textures have to be supplied at a range of sizes and formats, with the engine automatically choosing which one to use depending on how big it is in frame.
In contrast, the non-real time rendering engines used by 3D animation companies are not restricted by hardware. While the number of polygons in a scene will be limited by the amount of RAM in your machine, there’s no practical limit imposed by the software itself. The same is true for texture sizes – and also the number of textures. Using a standard renderer inside a 3D application does not require assets to be prepared and pre-processed in the same way that a gaming engine does. Basically, using a conventional 3D renderer is much easier, and offers a far greater amount of flexibility and sophistication than a gaming engine. The trade-off is speed.
As graphics hardware becomes more and more powerful, and modern gaming engines continue to produce increasingly stunning images, it’s only natural to look at a game that can render 60 frames of 4K video per second and wonder why Cinema 4D, Maya and other desktop 3D applications take so much longer. The answer is that the rendering engines used in desktop 3D apps are free from restrictions such as polygon budgets, texture memory limits and bandwidth constraints, offer far greater potential, and don’t need extensive pre-processing. A gaming engine might be able to render a stunning frame in a few hundredths of a second, but an enormous amount of time was spent beforehand, preparing assets to make that possible.
RTX emerges
When nVidia launched their 2000 series of GPUs in 2018, the goal of real time ray tracing was finally achievable, although initial support and capabilities were limited. But two years later, with the launch of the 3000 series, real-time ray tracing was finally a reality for gamers – assuming they could find a 3000 series GPU in stock somewhere.
While the driving force behind real-time ray tracing was to reduce the development costs associated with games, the photorealism that comes with ray tracing has made real-time 3D a feasible alternative for film and television production. While “The Mandalorian” has made headlines thanks to its “StageCraft” production techniques, the technology is currently in its infancy. However the potential to change the way film and television is produced is clearly evident, and only time will tell how current production and post-production techniques will be affected. VFX journalists are already pondering the future of compositing, and while compositing itself will always be a key link in the post-production chain, the specific tasks handed to compositors might change as real-time 3D changes the way visual effects are achieved in camera.
What this really means is that if we consider the topic of “After Effects and Performance” in the broadest possible sense, then we’re not just looking at the speed of rendering a single After Effects composition, but rather we’re looking at the future of visual fx and the role of compositing applications in film and television production.
Overclock your life
For many people the topic of “After Effects and Performance” will be hardware related. Which CPU, which GPU and so on. But throughout this series I’ve touched upon a few aspects of performance that go outside the notion of raw rendering speed. Productivity can be improved through hardware, eg more RAM, SSD cache drives, and also software – 3rd party scripts and plugins.
But there are still a few other things to consider that can help productivity, even if we’re heading away from hardware benchmarks and towards life hacks. Overall, the work you produce in a given day, week, month or year isn’t going to be defined by the CPU in your machine. When people look at your work they don’t say “hey, check out Bob’s work. He’s the guy with a Core i9 10900K and an nVidia 3080”.
Ultimately, you are the one responsible for the work you produce – and so looking after yourself is just as important as choosing the right GPU. Take some time to think about how you work, how you feel, and what you can do to improve your working environment. Don’t settle for a cheap keyboard. If you use your mouse a lot, look for a comfortable one. A good monitor setup is just as important as a good CPU, so consider if it’s time for something bigger, better, or wider.
If you have a budget to improve performance, you don’t have to spend it all on your computer. There’s not much point in having a fast machine if you’re too exhausted to use it. A comfortable, ergonomic chair can help you maintain focus and avoid muscle cramps. Give yourself time to go for walks. Maybe try shiatsu or relaxation massage. Take up cycling. Buy some noise-cancelling headphones. Treat yourself to an espresso machine, or a smoothie-maker.
Many years ago in a different article, I suggested that joining a gym might bring greater long-term improvements to performance than buying a new GPU. It’s certainly something to consider…
This is part 16 in a long-running series on After Effects and Performance. Have you read the others? They’re really good and really long too:
Part 1: In search of perfection
Part 2: What After Effects actually does
Part 3: It’s numbers, all the way down
Part 4: Bottlenecks & Busses
Part 5: Introducing the CPU
Part 6: Begun, the core wars have…
Part 7: Introducing AErender
Part 8: Multiprocessing (kinda, sorta)
Part 9: Cold hard cache
Part 10: The birth of the GPU
Part 11: The rise of the GPGPU
Part 12: The Quadro conundrum
Part 13: The wilderness years
Part 14: Make it faster, for free
Part 15: 3rd Party opinions
And of course, if you liked this series then I have over ten years worth of After Effects articles to go through. Some of them are really long too!