Suggested workflows when dealing with non-square pixels and anamorphic formats.
By Chris and Trish Meyer | February 18, 2008
No matter which workflow you choose, always make sure your source footage has been tagged with its true pixel aspect ratio - this is the only way your software will know what to do with it in order to keep you out of trouble.
For a variety of arcane technical reasons (trying to record NTSC and PAL on the same tape, cutting corners on data throughput, being compromised by camera sensor technology of yesteryear, etc.), virtually all digital video formats have non-square pixels. This means they must be stretched or squashed on playback to properly fill the television screen. Unfortunately, a side effect of this is that they will also look odd on a computer screen. When all you do is send the digital signal from camera to tape deck to switcher to monitor, this is neatly hidden from you. But when you start working with digital video inside a computer, you have to deal with these misshapen pixels.
As a result, a common question is what is the best way to work with these pixels: Stretch them back out to being square? Or leave them in their native format? The answer depends on what your primary goal is in life: preserving maximum image quality, or preserving your own sanity.
The Maximum Original Image Quality Route
If your primary goal is to preserve the original image as much as possible in order to make sure it cuts back in with similar sources without detection, then you should work in the native pixel aspect ratio of your source footage.
For example, if you are working with 1080-line HDV footage, which uses a frame size of 1440x1080 pixels, set your compositions, sequences, or timelines to use 1440x1080 with a pixel aspect ratio (PAR) of 1.333:1 - or more accurately, 4:3. This means the original footage can pass through untouched, rather than being stretched to 1920 pixels wide and then later squashed back to 1440.
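The arithmetic behind that 1440-versus-1920 relationship is simple: the displayed width is the stored width multiplied by the pixel aspect ratio. Here is a minimal sketch (our own illustration, not from any particular editing application) using Python's exact fractions so no rounding creeps in:

```python
# Illustrative PAR arithmetic: displayed width = stored width x PAR.
from fractions import Fraction

def display_width(stored_width: int, par: Fraction) -> int:
    """Width of the frame once its non-square pixels are stretched for display."""
    return int(stored_width * par)

# 1080-line HDV stores 1440x1080 with 4:3 (1.333:1) pixels:
print(display_width(1440, Fraction(4, 3)))  # -> 1920, the familiar HD width
```

Working in the native 1440x1080 frame simply postpones that multiplication until final playback, instead of performing it (and its inverse) inside your project.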
If you plan to use any additional graphics such as text overlays, digital photos, and the like, you have the option of creating them at either 1440x1080 with a PAR of 4:3 or 1920x1080 with square pixels. As long as you have the PAR tagged correctly, your software will squash them as needed to conform your new sources to the size of your composition.
In the past, you couldn't always expect software to factor the PAR into every calculation. As a result, sometimes effects would be processed incorrectly (meaning circular gradients would end up looking like eggs, etc.). For this reason, we tend to use the square pixel size for our additional graphics just to be safe. Software is smarter these days, but old habits die hard - especially when they were born out of being bitten. As long as we tag these sources as having square pixels, our software should scale them down to fit the frame size of our compositions or timeline. Since the images are being scaled horizontally instead of vertically to fit, we don't have to worry about this scaling introducing field flicker.
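To see why only the horizontal axis is touched, consider the scale factors involved when a tagged layer is conformed into a comp with a different PAR. This is a hedged sketch of the underlying math - the function name and interface are our own invention, not any application's API - assuming the source and comp share the same frame height:

```python
# Sketch of the conform math for a tagged layer placed in a comp with a
# different pixel aspect ratio (same frame height assumed).
from fractions import Fraction

def conform_scales(src_par: Fraction, comp_par: Fraction):
    """Scale factors a layer needs to display undistorted in the comp."""
    h_scale = src_par / comp_par   # horizontal squeeze or stretch
    v_scale = Fraction(1)          # heights match, so no vertical resampling
    return h_scale, v_scale

# A square-pixel 1920x1080 graphic dropped into a 1440x1080, PAR 4:3 comp:
h, v = conform_scales(Fraction(1, 1), Fraction(4, 3))
print(h, v)  # 3/4 and 1: squeezed to 1440 stored pixels wide, height untouched
```

Because the vertical factor stays at exactly 1, no scan lines are resampled - which is precisely why this workflow avoids introducing field flicker.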
The problem with this workflow is that your original sources, compositions, and sequences may look odd on a computer screen. Most programs have a way to correct for any PAR distortion in their display, but this correction has the side effect of either reducing the displayed image quality - for example, After Effects uses a crunchy nearest neighbor algorithm to stretch the pixels - or consuming more CPU cycles to display smoothly. (If this trade-off seems worth it to you, on the last page we'll show you the secret switch which enables it in After Effects.)
The better solution is to preview your comp or edit window out through a true video chain on a real video monitor - this way, you will see the image after it has been stretched by an actual delivery chain. The additional benefit of this workflow is that you will see any color space shifts or safe area restrictions in the real world, on a real monitor.