Tapeless Cinematography Workflow v2
By vittiPhoto | August 07, 2008
As an early adopter of tapeless cinematography, I regularly see these questions on blogs, bulletin boards, listservs and forums. Tapeless being a new technology and workflow, there are kinks to work out and learning curves to scale. This article addresses archiving a tapeless production's image and sound assets with a reasonable yet secure system.
Film set the narrative and commercial image acquisition standard over the past century, but the process from set to edit can be prohibitive. Since the late 1960s, analog tape has been used for broadcast and corporate communications, and with the emergence of digital tape and disc formats, for almost every media industry. Both film and tape provide instant archival mediums for their video, audio and metadata assets, but with the domination of nonlinear editing systems across all media industries, these formats require capture of those assets into the digital realm. Tapeless image (and audio) acquisition is emerging on set as an alternative to film, tape and optical disc based recording. Solid state's current high cost necessitates using it as a temporary archive, analogous to a film camera's magazine, conveying assets to a more affordable archive: computer hard disk drives.
New tapeless cinematography systems abound, from traditional sources like Sony, Panasonic and JVC, traditionally film-based camera manufacturers Arri and Panavision, and new players like Red, Silicon Imaging and Phantom, to name a few. Parallel to the hardware developments are video format and codec developments. Together these indicate the multiple paths being explored in camera and solid state form factors, video formats and codecs, and post workflow.
The following is a case study of "Blackout", a multi-camera, tapeless feature shot on two Panasonic AG-HVX200s, using a nonlinear edit system (in this case, Apple's Final Cut Pro) for syncing, dailies assembly and, eventually, picture edit, to illustrate some of the benefits of tapeless acquisition to digital post.
Fundamentally, storytelling assets take the form of image, sound and metadata recorded onto a medium. The medium can be film, tape, optical disc or solid state non-volatile memory. Digital assets may be further defined as uncompressed or compressed, the latter requiring a codec to access the images. Metadata might take the analog form of keycode, footage/frames and timecode for film. Digital metadata may include user bits, timecode and other shot, production or camera data. Metadata may also be recorded visually in the form of a clapper slate relaying production, scene, take, camera reel and date information.
"Blackout" was shot in the summer of 2006 with a $400,000 budget and goal for theatrical release. The production was based on multi camera, dual system sound coverage. The script supervisor controlled scene, take, director?s circle takes and assigned reel numbers in sequential order. The Camera Department controlled the slate and image recordings at 720 24pn onto P2 cards, the Audio Department provided audio recording onto CF cards.
The medium dictates how we access the recorded assets. Film and tape are generally accessed linearly in real time: tape is played down and captured as assets into your system, and film is transferred to tape, then captured. Assets on optical disc and solid state devices can be accessed nonlinearly. Film, tape and optical disc create instant, relatively long term archives at the point of capture. Solid state devices comprised of non-volatile computer memory, currently around $50/GB, may not be cost effective as the archival medium*. Developing a disk based archive for your assets, one that is cost effective while setting up your project for post, mitigates the effort while providing a redundant data set that can survive device and human error as a near term archival solution.
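To put those economics in rough numbers, here is a back-of-envelope sketch. The ~$50/GB solid state figure comes from the text above; the disk price and the ~40 Mbps DVCPROHD bitrate used here are illustrative assumptions, not quoted figures:

```python
# Back-of-envelope cost per shooting hour (2008-era figures).
# $50/GB for solid state is from the article; the $0.25/GB disk price
# and the ~40 Mbps DVCPROHD bitrate are illustrative assumptions.
MBPS = 40                                # video bitrate, megabits/second
GB_PER_HOUR = MBPS / 8 * 3600 / 1024     # ~17.6 GB per shooting hour

SOLID_STATE_PER_GB = 50.00               # P2-class memory, $/GB
HARD_DISK_PER_GB = 0.25                  # commodity hard disk, $/GB (assumed)

print(f"{GB_PER_HOUR:.1f} GB/hour of footage")
print(f"solid state archive: ${GB_PER_HOUR * SOLID_STATE_PER_GB:,.0f}/hour")
print(f"hard disk archive:   ${GB_PER_HOUR * HARD_DISK_PER_GB:,.2f}/hour")
```

At roughly two orders of magnitude between the two media, the case for treating the cards as a reusable "magazine" and hard disks as the archive makes itself.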
If media type dictates the capture process, then the video format describes the computer and storage subsystem. Any video format contains pixel dimensions, frame progression, metadata and the encoded video and audio, usually lossy or losslessly compressed. With video codecs, there's no such thing as a free lunch: compression is a tradeoff between disk space, video quality and the hardware needed to decompress it. An uncompressed video format requires large drive arrays to achieve the volume and speed needed, while a compressed video format requires more CPU power to decode a comparable image. Compression may employ lossy (or visually lossless) techniques that can reduce image quality and demand more CPU during playback, but make playback possible on a modest computer system. When specifying a computer system, consider the video and audio streams, and don't overlook the required system overhead, anticipated graphics needs or the number of simultaneous video streams.
Using the DVCPROHD video format, the production specified the deliverable to be 1280x720 at 24 progressive frames per second. This codec requires approximately 40 Mbps, easily provided by an Apple G4 667MHz PowerBook and a FireWire 400 external hard drive delivering 2+ video streams for assemblies.
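A quick sanity check on that claim. FireWire 400's nominal rate is 400 Mbps, but real external drives sustain less; the ~30 MB/s figure below is an assumed sustained throughput, not a measured one:

```python
# Can one FireWire 400 external drive feed 2+ DVCPROHD streams?
STREAM_MBPS = 40                 # DVCPROHD bitrate per stream, megabits/s
FW400_SUSTAINED_MBPS = 30 * 8    # assumed ~30 MB/s sustained = 240 megabits/s

streams = FW400_SUSTAINED_MBPS // STREAM_MBPS
print(f"room for {streams} simultaneous streams")
assert streams >= 2              # comfortably covers the 2+ streams needed
```

Even with conservative drive throughput, the interface has plenty of headroom for multicam assemblies.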
Using high definition tape as the benchmark cost per unit time, until solid state storage reaches the tape-comparable $1/minute price point, the following alternative is suggested for any tapeless project: choose a metadata key, copy the assets as DATA from the solid state medium to a mirrored array (safety through drive redundancy), then log and transfer into your NLE onto a striped array (speed through striping) as your MEDIA.
This system builds two copies of the assets as DATA on two drives to protect against device failure, and a third copy of the rewrapped assets as your project MEDIA on a separate volume, generally a striped (fast) array. This is roughly the equivalent of a negative and work print, or of dubbing shot tapes and then digitizing them into your NLE. If production insists on individual external drives, P2 Genie can automatically copy a card to multiple volumes.
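The copy-to-DATA step can be sketched in a few lines. This is a minimal illustration, not P2 Genie or any vendor tool; the folder paths and the choice of SHA-1 for verification are assumptions, and a real tool would also handle resume, collision checks and logging:

```python
import hashlib
import shutil
from pathlib import Path

def sha1_of(path: Path) -> str:
    """Checksum a file in 1 MB chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def archive_card(card_root: Path, archive_root: Path, reel: str) -> Path:
    """Copy a P2 card's tree into a reel-named folder on the DATA
    archive, then verify every file against its source checksum."""
    dest = archive_root / reel
    shutil.copytree(card_root, dest)
    for src in card_root.rglob("*"):
        if src.is_file():
            copy = dest / src.relative_to(card_root)
            if sha1_of(src) != sha1_of(copy):
                raise IOError(f"checksum mismatch: {copy}")
    return dest

# Hypothetical usage, with made-up volume names:
# archive_card(Path("/Volumes/P2CARD"), Path("/Volumes/DATA_RAID1"), "R014")
```

Verifying against checksums, rather than trusting the copy, is what turns a simple duplicate into a defensible archive.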
Since production used a timecode clapper slate for syncing video and audio, and tied the assets to the story with a traditional scene/take/reel coding system, the natural link was unique reel numbers, where each reel was attributed to a P2 card load. Reel numbers became the key field that tied the assets (DATA and MEDIA) to the script.
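With reel numbers as the key, the script supervisor's log can be queried directly against the archive. A tiny sketch, with made-up reel, scene and take values:

```python
# Hypothetical shot log keyed by reel number (one reel per P2 card load).
shot_log = {
    "R001": [{"scene": "12A", "take": 3, "circled": True}],
    "R002": [{"scene": "12A", "take": 4, "circled": False},
             {"scene": "14",  "take": 1, "circled": True}],
}

# Pull every circled (director-selected) take for the dailies assembly.
circled = [(reel, entry["scene"], entry["take"])
           for reel, entries in shot_log.items()
           for entry in entries if entry["circled"]]
print(circled)   # [('R001', '12A', 3), ('R002', '14', 1)]
```

Because the same reel number names the DATA folder and the NLE reel, one lookup reaches all three copies of a shot.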
The HD Tech station consisted of:
- current laptop**
- IEEE 1394a/b based RAID 1 mirrored array
- IEEE 1394a/b based RAID 0 striped array
- 17-21" monitor
On "Blackout", the HD Tech station was setup in the holding area located in a church community room, a few blocks away from the set in a secure, stable environment. When the company moved off site, the setup took over a corner inside the location vehicle, a modi?ed RV used for still photo productions with a generator. In either case, runners would bring shot & write protected (locked) P2 cards to the HD Tech station. Insert the P2 card into the reader slot, check the reel number on the slate, copy the card contents to the asset archive into a reel named folder on the RAID1 mirrored array, then transfer into the NLE and onto the RAID 0 striped array using the assigned reel number. Audio followed video. At the end of each shooting day, cards were copied to the DATA archive, then transfered to the MEDIA array.