Last week, Jim Jannard at RED invited me to come down to RED Studios to talk about EPIC and HDRx, RED’s new High Dynamic Range capture mode. I made the trip this past Tuesday, and had the opportunity to sit down with Jim, Jarred Land, and Deanan DaSilva for three hours. Here’s what I learned about HDRx; a report on EPIC will follow.
EPIC HDRx
“The Mysterium-X sensor was designed for HDR in the first place, but the RED ONE doesn’t have the capability to support it, and we sort of forgot about it”, Jim said. “Then along came Alexa, and all this talk of HDR, so we looked at it again in the EPIC.”
Jim and Jarred took a prototype EPIC with EPIC HDRx firmware to Las Vegas on Saturday and shot several clips, including a clip he posted on reduser.net last Sunday.
If you haven’t seen this clip yet, grab it now (suggestion: open QuickTime Player or QuickTime Player 7, use File > Open URL…, and load http://red.cachefly.net/17hx.mov).
Compare it to a conventional still-camera photo of the same location here, and look at the difference in highlight rendering, especially in the “Binion’s” signs and the KENO CRAPS ROULETTE sign along the edge of the building. Then play the clip and, once you’re done watching the motion, leave it parked around the nine-second mark. Observe the rendering of the LEDs in the “don’t walk” sign, and the incandescent bulbs on the canopies of the Gold Rush casino on the left side of the picture.
Here’s the reduser thread about the clip: http://reduser.net/forum/showthread.php?t=49940
Here’s the thread about EPIC HDR in general: http://reduser.net/forum/showthread.php?t=49668
This sample clip is flat, lacking in contrast; it’s probably not something you’d use directly. The point of it is to capture as much dynamic range in the image as possible, so you have more freedom in grading to play with the tonal scale as you wish.
What RED has done is come up with a way (currently the subject of a patent application, so I cannot be more specific) of getting “two different exposures that are conjoined”, a normal exposure to capture the bulk of the tonal scale, and a much shorter exposure to capture highlight detail that would otherwise be blown out. These two exposures can be combined in-camera, which RED calls “EasyHDR”, or stored as two separate streams for combining in post with more control: “HDRx”.
EPIC HDRx can be set for +3, +4, +5, or +6 stops of additional highlight capture. At +3, the setting used for the Vegas clip, the highlight image’s effective exposure time is 3 stops shorter than the main exposure’s: if you’re shooting 24fps with a “180 degree shutter”, your main exposure time will be 1/48 sec, and your highlight exposure time will thus be 1/384 sec.
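The arithmetic is simple: each additional stop of highlight protection halves the highlight exposure time. Here’s a minimal sketch of that math in Python (the function names are mine; the frame rate, shutter angle, and stop offsets are the ones discussed above):

```python
# Effective exposure times for an HDRx-style highlight capture.
# Each extra stop of highlight protection halves the exposure time.

def shutter_time(fps: float, shutter_angle: float) -> float:
    """Main exposure time in seconds for a given frame rate and shutter angle."""
    return (shutter_angle / 360.0) / fps

def highlight_time(main_time: float, stops: int) -> float:
    """Highlight exposure time, `stops` stops shorter than the main exposure."""
    return main_time / (2 ** stops)

main = shutter_time(fps=24, shutter_angle=180)   # 1/48 sec
for stops in (3, 4, 5, 6):
    print(f"+{stops}: 1/{round(1 / highlight_time(main, stops))} sec")
# +3: 1/384   +4: 1/768   +5: 1/1536   +6: 1/3072
```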
The two exposures are then combined using a fixed tone mapping, so you don’t get any sort of exposure pumping, or local-adaptation edging or haloing, as the shot progresses. Different tone maps with different “crossover points” and mixing profiles will likely be selectable both in camera (for EasyHDR) and in post (for HDRx post-processing). RED’s reclusive, clever Canadian boffin, Graeme Nattress, is actively working on the tone mapping methods; what was used in the Vegas clip and what I saw in LA was only his first cut, and Jim said that he was expecting updated software later that day.
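RED hasn’t said what its tone mapping actually looks like (that’s the patent-pending part), but the general character of a fixed, value-driven blend is easy to illustrate. In the sketch below, the crossover point, the blend width, and the smoothstep weighting are all my own illustrative guesses, not RED’s method:

```python
import numpy as np

def fixed_hdr_blend(normal: np.ndarray, highlight: np.ndarray,
                    stops: int, crossover: float = 0.7,
                    width: float = 0.2) -> np.ndarray:
    """Merge a normal exposure with a `stops`-shorter highlight exposure.

    Both inputs are linear-light frames scaled 0..1. The blend weight
    depends only on each pixel's own value, never on its neighbors or
    on prior frames, so there is no local-adaptation haloing and no
    frame-to-frame exposure pumping.
    """
    # Rescale the short exposure onto the normal exposure's linear scale.
    highlight_scaled = highlight * (2.0 ** stops)
    # Smoothstep ramp from 0 to 1 across the crossover region.
    t = np.clip((normal - (crossover - width / 2)) / width, 0.0, 1.0)
    weight = t * t * (3.0 - 2.0 * t)
    return (1.0 - weight) * normal + weight * highlight_scaled
```

Because the weight is a fixed function of pixel value, identical pixel values always blend identically, frame after frame; that’s what keeps the exposure from pumping as the shot progresses.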
If EasyHDR in-camera combining is used, the image will look like the Vegas clip as far as motion rendering is concerned: the two images are combined as-is, tone mapped but otherwise unprocessed (and Graeme’s improved tone mapping will be available in-camera as well as in post; a Vegas clip shot next Sunday may look better than the Vegas clip posted last Sunday). Looking at the clip, paused at the nine-second mark, there’s a short, sharp highlight image, followed immediately—without any gap—by a normally-blurred main exposure. As the two exposures are merged prior to recording, there’s no increase in data rate for EasyHDR.
Detail from the H.264 1500×750 Las Vegas clip at the nine-second point.
Observe the LEDs in the “don’t walk” sign and the bulbs on the canopies: there’s a dim, motion-blurred streak for 1/48 sec, with a sharp, 1/384 sec image at the trailing edge of the blur: visually similar to the effect of a long exposure on a still camera with a “first curtain” flash, a flash fired at the beginning of the exposure. It may seem backwards at first, but remember that the bulbs are moving to the right in the image, thus their blurs precede their sharp images: the reverse of the standard cartoon convention, in which the blurs of a fast-moving subject trail their sharp leading edges.
I asked Jim if it would be possible to reverse the order of exposure, to simulate the look of a “second curtain” flash, in which the blur follows the sharp image instead of preceding it, giving us nice still frames that match our comic-book-driven expectations. He gave me a pained look, and agreed that it would be nice, but it wouldn’t be in the first release, and it would be a “science project” to see if it could be made to happen at all.
If you defer exposure combining to post-processed HDRx, the two images are stored as separate tracks in the same REDCODE file. Yes, this stores twice the data, and may require up to twice the data rate if one is to maintain the same compression levels (the highlight track will use different encoding parameters to optimize the data rate, so it usually won’t be twice as much). Even so, EPIC provides REDCODE 50 and REDCODE 100 (RED ONE normally shoots in REDCODE 28 or REDCODE 36), so this isn’t an insurmountable issue.
In HDRx, one may choose a straight combination of the two exposures, just as EasyHDR does; RED calls this “Magic Motion”. It’s quick and simple.
Alternatively, one can choose MNMB: More Normal Motion Blur. MNMB uses optical-flow analysis being developed in cooperation with The Foundry, a UK-based developer of high-end VFX software (does “Nuke” sound familiar?), to motion-blur the highlights image to match the normal image before combining them. The net effect should be a final image that looks just like a normal, 1/48 sec motion-blurred image, just a very flat one with greater-than-normal dynamic range. It will likely be a bit slower to render than “Magic Motion”, and it’s definitely too complex and slow a process to perform in real time in the camera itself.
I wasn’t shown a MNMB image (I don’t think the software is done yet), but I have high hopes; The Foundry writes very clever code. They have both the sharp highlight image available for precise motion tracking and the blurred normal image to show them exactly what motion path to follow between frames; they can use spatial cross-correlation to refine their post-blurred highlight image before merging, so I expect the result will look very “normal” indeed.
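The Foundry’s actual algorithm is their business, and I haven’t seen it, but the core operation, smearing the sharp highlight frame along per-pixel motion vectors until its blur matches the normal frame’s, can be sketched. In the toy version below, the flow field is assumed to come from some external optical-flow estimator; computing that field well is the hard part, and the part The Foundry excels at:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def flow_blur(highlight: np.ndarray, flow: np.ndarray,
              samples: int = 16) -> np.ndarray:
    """Motion-blur a sharp grayscale frame along a per-pixel flow field.

    `highlight` is (H, W); `flow` holds the (dy, dx) displacement each
    pixel travels during the normal exposure, shape (2, H, W). We average
    `samples` taps spaced evenly along each pixel's motion path, which
    approximates the smear a full-length exposure would have recorded.
    """
    h, w = highlight.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    acc = np.zeros((h, w), dtype=np.float64)
    for i in range(samples):
        frac = i / (samples - 1)  # position along the motion path, 0..1
        coords = [ys + frac * flow[0], xs + frac * flow[1]]
        acc += map_coordinates(highlight, coords, order=1, mode='nearest')
    return acc / samples
```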
(Those same advantages may be a great boon for VFX compositors: if you have a sharp image for motion tracking, you can pull a more stable track with less bobble and jitter, requiring less manual cleanup. I can see capturing the highlights pass for tracking alone, even if you only use the normal exposure in the actual image. HDRx will allow that workflow.)
What I did see was the raw REDCODE 50 file from the Vegas shoot (along with several other clips shot in Vegas and at RED Studios) brought into Graeme’s in-house development software, RAND, which Jim said stood for Research ANd Development (and Jarred described as Research And Nattress Development!). RAND looks like a version of RED Alert! on steroids, but built with the same UI toolkit as REDCINE-X.
Jim stepped through the raw clip; in REDCINE-X it appears as alternating bright (normal) and dark (highlight-exposure) frames, a display mode I found fascinating, and I hope will be available in whatever processing tool winds up being released. RAND shows each stream individually, or as the combined HDRx image, and it has some limited tools for tweaking the HDR tone mapping. RAND outputs TIFF files, and Jim decided to spit out TIFFs at half-res (2.5k pixel width) to load into their theater playback system, a DVS Clipster, for playback on their 4K-capable Sony SXRD projector.
Jim, Jarred, and Deanan all expressed some nervousness about doing this with me present; they hadn’t looked at this clip (or any other HDR clips) on the big screen before: they only shot them on Saturday, rendered a low-res copy for the web on Sunday, and were swamped with other tasks on Monday, so they hadn’t had the chance. They weren’t sure it would hold up well, but they decided to go ahead and do it anyway.
It took about fifteen minutes to render half-res TIFFs of this 00:12:18 shot (remember, this is development code; it hasn’t been optimized in any way, shape, or form), after which Jim put them on an SSD he borrowed from Jarred’s desk and handed it off to Deanan. Deanan dumped the frames into Clipster, which took a while; I was busy poking and prodding an EPIC at the time, so I didn’t time it precisely. Eventually it was done, and all four of us walked over to the screening area at the other end of Stage 4 and watched playback.
It looked just fine.
Let me clarify that. It looked better than it should: it looked normal.
When you look at a still frame from a “Magic Motion” or “EasyHDR” clip, with the blurred normal exposure and the sharp highlights exposure, you might expect that playing that clip at 1x would give you a cross between normal blurry motion and short-shutter, staccato, pixilated motion—the “Gladiator” or “Saving Private Ryan” look.
It didn’t look like that at all. It looked just like a regular, 1/48 sec exposure, with perfectly normal motion blur. I squinted and stared; I tried keeping a fixed gaze as well as following the motion; I stood right up close to the screen; I stood back where the screen “only” spanned 90 degrees of arc; I stood back at a normal viewing distance. We watched playback of the looped clip for about a minute, then I asked Deanan to pause the clip on a motion-blurred part, around nine seconds in, just to verify that the clip had the sharp/blurred “Magic Motion” look, and they hadn’t snuck in a MNMB clip on the sly (they hadn’t). For whatever reason, my eye and brain “read” the moving clip as a perfectly normal image, not a staccato “Saving Private Ryan” battle scene with a bit of low-key blur mixed in.
It’s actually a bit disturbing how well it works.
Jim had the impression of greater sharpness than he would otherwise expect to see. I wasn’t sure about that myself; without proper side-by-side tests, I couldn’t say whether the brain (or at least my brain) both sees the added sharpness of the short exposure and mentally integrates it into the blur.
I had the impression (also subject to further experimentation) that the image looked slightly more natural than run-of-the-mill 24p digital camera panning, more like a film camera’s image would. The blur of a moving image, captured on film, has a smooth “fade-in, fade-out” quality due to the penumbral sweep of the rotating shutter; the ends of motion trails in digital capture have harder edges from the “instant-on, instant-off” integration period of an electronic shutter, which give 180-degree electronic shutters a harsher, more staccato motion rendering. Does the bright-and-sharp, dim-and-blurred combo of “Magic Motion” mimic the penumbral feathering of motion blur, or at least stimulate the human visual system similarly? My gut impression is that it is working in a similar way, but I really need to do a side-by-side comparison before I say it’s anything more than my own, entirely subjective feeling.
Jim claims that at +6, EPIC is capable of capturing 18 stops of dynamic range, which is more than most people routinely need; he thinks +3 (about 15-16 stops) will be the most commonly-used setting—when it’s used at all. Most well-lit, well-controlled scenes won’t need any HDR boost to begin with; the Mysterium-X sensor captures a fairly wide dynamic range as it is.
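A stop is a doubling, so those figures translate into contrast ratios like so (the 12-stop non-HDR baseline is my own back-of-envelope inference from Jim’s numbers, not a published RED spec):

```python
# Each stop of dynamic range doubles the scene contrast captured.
for stops in (12, 15, 18):
    print(f"{stops} stops = {2 ** stops:,}:1 contrast")
# 12 stops = 4,096:1   15 stops = 32,768:1   18 stops = 262,144:1
```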
He showed me another clip on his Mac, shot behind Stage 4 on a sunny day some time between 10am and noon (just guessing from the shadow angles). There was a small car with metallic silver paint; there was another car with a shadowed, black grille. The highlights on the silver car were unclipped, and the shadows in the black grille weren’t crushed. That clip was shot at HDRx +3, and it was just fine.
To summarize, then, and expand a bit:
• EPIC HDRx captures “two different exposures that are conjoined”, a short one for highlights and a normal one for the main image. The short exposure can be set for +3, +4, +5, or +6 stops of highlight capture.
• The two exposures can be combined in-camera (“EasyHDR”) and output as a single stream. The image shows normal motion blur over the lower part of the tonal scale, but adds a sharper trailing edge on highlights (“Magic Motion”).
• The two exposures can be stored as separate tracks in the camera for post-processing (“HDRx”). In post, the tracks can be combined quickly for a “Magic Motion” look, or, with a bit more processing using Foundry-written code, as MNMB (“More Normal Motion Blur”) that is expected to look just like regular shutter blur throughout the entire tonal scale.
• In full-speed playback, “Magic Motion” looks very much like normal shutter blur, much more so than still frames of Magic Motion would lead you to expect. In slo-mo, it becomes more apparent; at 50% speed, it’s noticeable but oddly undisturbing; at 25% it’s a definite “look” (download the clip and play with it in the NLE of your choice, and see what you think). Some people will like the look; others will think it’s an abomination. That’s freedom, isn’t it?
• HDRx is an option: use it if you want, leave it off if you prefer.
• Shooting HDRx for post may offer significant advantages for VFX: the sharp highlight track may make motion tracking faster and more accurate, and may be worth capturing for that purpose alone, even if it’s not used in processing the main image itself.
• These are early days: there’s a lot more work to be done on the blending algorithms and overall implementation. What I saw was a work in progress, and no release date has been specified.
• EPIC HDRx is currently confined to EPIC. RED ONE lacks the processing power to do it, even with the Mysterium-X upgrade. There is some discussion of putting it in Scarlet, but that would probably kick the price up by $1000, and it’s unclear that the Scarlet market would justify it (remember the optical finder option for RED ONE? RED never got a single order for one… so they never built it).
Overall, a most interesting demo. HDRx will be a much-appreciated addition to the RED camera toolkit, and I look forward to exploring it further once I can get my hands on a production-model EPIC.
FTC Disclaimer: No material connection exists between me and RED, other than as a customer. My employer, Meets The Eye LLC, purchased three RED ONEs two years ago on my recommendation, and we have since purchased two M-X upgrades (soon to be three), three RED ROCKET hardware decoders, two 18-85mm zoom lenses, and a selection of accessories. We are in line for EPIC-X upgrades, which we applied for several weeks ago. I do not personally own any RED products nor do I have any financial interest in the company. I paid my own way to Los Angeles ($182.95 airfare and rental car) and received no material compensation from RED, other than a Starbucks Coffee Frappuccino from the company fridge: retail value about $2.50.