
RED Visit, 21 September: HDRx

RED's approach to HDR capture works better than it should.

By Adam Wilt | September 23, 2010


Last week, Jim Jannard at RED invited me to come down to RED Studios to talk about EPIC and HDRx, RED's new High Dynamic Range capture mode. I made the trip this past Tuesday, and had the opportunity to sit down with Jim, Jarred Land, and Deanan DaSilva for three hours. Here's what I learned about HDRx; a report on EPIC will follow.

EPIC HDRx



"The Mysterium-X sensor was designed for HDR in the first place, but the RED ONE doesn't have the capability to support it, and we sort of forgot about it", Jim said. "Then along came Alexa, and all this talk of HDR, so we looked at it again in the EPIC."

Jim and Jarred took a prototype EPIC with EPIC HDRx firmware to Las Vegas on Saturday and shot several clips, including a clip he posted on reduser.net last Sunday.


If you haven't seen this clip yet, grab it now (suggestion: open QuickTime Player or QuickTime Player 7, use File > Open URL..., and load http://red.cachefly.net/17hx.mov).

Compare it to a conventional still-camera photo of the same location here, and look at the difference in highlight rendering, especially in the "Binion's" signs and the KENO CRAPS ROULETTE signage along the edge of the building. Then play the clip, and (once you're done watching the motion), leave it parked around the nine second mark. Observe the rendering of the LEDs in the "don't walk" sign, and the incandescent bulbs on the canopies of the Gold Rush casino on the left side of the picture.

Here's the reduser thread about the clip: http://reduser.net/forum/showthread.php?t=49940

Here's the thread about EPIC HDR in general: http://reduser.net/forum/showthread.php?t=49668


This sample clip is flat, lacking in contrast; it's probably not something you'd use directly. The point of it is to capture as much dynamic range in the image as possible, so you have more freedom in grading to play with the tonal scale as you wish.

What RED has done is come up with a way (currently the subject of a patent application, so I cannot be more specific) of getting "two different exposures that are conjoined", a normal exposure to capture the bulk of the tonal scale, and a much shorter exposure to capture highlight detail that would otherwise be blown out. These two exposures can be combined in-camera, which RED calls "EasyHDR", or stored as two separate streams for combining in post with more control: "HDRx".

EPIC HDRx can be set for +3, +4, +5, or +6 stops of additional highlight capture. At +3, which is what the Vegas clip was shot at, that implies an effective exposure time of 3 stops less for the highlights image: if you're shooting 24fps with a "180 degree shutter", your main exposure time will be 1/48, and your highlight exposure time would thus be 1/384 second.
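That shutter arithmetic is easy to check; here's a quick sketch (the function name is mine, for illustration, not anything in RED's firmware):

```python
def highlight_exposure(fps, shutter_degrees, hdr_stops):
    """Return (main, highlight) exposure times in seconds.

    The highlight exposure is the main exposure shortened by
    hdr_stops stops, i.e. divided by 2**hdr_stops.
    """
    main = (shutter_degrees / 360.0) / fps   # 180-degree shutter at 24fps = 1/48 sec
    highlight = main / (2 ** hdr_stops)      # +3 stops shorter = 1/384 sec
    return main, highlight

main, hl = highlight_exposure(fps=24, shutter_degrees=180, hdr_stops=3)
print(f"main: 1/{1/main:.0f} sec, highlight: 1/{1/hl:.0f} sec")
# main: 1/48 sec, highlight: 1/384 sec
```

At +6, the same math puts the highlight exposure at 1/3072 sec.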

The two exposures are then combined using a fixed tone mapping, so you don't get any sort of exposure pumping, or local-adaptation edging or haloing, as the shot progresses. Different tone maps with different "crossover points" and mixing profiles will likely be selectable both in camera (for EasyHDR) and in post (for HDRx post-processing). RED's reclusive, clever Canadian boffin, Graeme Nattress, is actively working on the tone mapping methods; what was used in the Vegas clip and what I saw in LA was only his first cut, and Jim said that he was expecting updated software later that day.
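RED hasn't disclosed its actual tone maps, but the general idea of a fixed, per-pixel crossover blend can be sketched like this (the smoothstep profile, crossover point, and all names here are my illustrative guesses, not Graeme's code):

```python
def smoothstep(edge0, edge1, x):
    """Standard smoothstep: 0 below edge0, 1 above edge1, smooth in between."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def blend_pixel(normal, highlight, hdr_stops, crossover=0.7, width=0.25):
    """Fixed tone-mapped mix of a normal and a short (highlight) exposure.

    normal and highlight are linear-light values in [0, 1] as captured;
    the highlight exposure is first scaled up by 2**hdr_stops to put both
    exposures on the same linear scale.
    """
    hl_linear = highlight * (2 ** hdr_stops)        # rescale the short exposure
    w = smoothstep(crossover, crossover + width, normal)
    return (1.0 - w) * normal + w * hl_linear       # midtones from normal, highlights from short
```

Because the blend weight is a fixed function of the pixel value, with no dependence on neighboring pixels or previous frames, every frame is mapped identically; that's what rules out the pumping and haloing that scene-adaptive HDR methods can exhibit.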

If EasyHDR in-camera combining is used, the image will look like the Vegas clip as far as motion rendering is concerned: the two images are combined as-is, tone mapped but otherwise unprocessed (and Graeme's improved tone mapping will be available in-camera as well as in post; a Vegas clip shot next Sunday may look better than the Vegas clip posted last Sunday). Looking at the clip, paused at the nine second mark, there's a short, sharp highlight image, followed immediately—without any gap—by a normally-blurred main exposure. As the two exposures are merged prior to recording, there's no increase in data rate for EasyHDR.


Detail from the h.264 1500x750 Las Vegas clip at the nine second point.


Observe the LEDs in the "don't walk" sign and the bulbs on the canopies: there's a dim, motion-blurred streak for 1/48 sec, with a sharp, 1/384 sec image at the trailing edge of the blur: visually similar to the effect of a long exposure on a still camera with a "first curtain" flash, a flash fired at the beginning of the exposure. It may seem backwards at first, but remember that the bulbs are moving to the right in the image, thus their blurs precede their sharp images: the reverse of the standard cartoon convention, in which the blurs of a fast-moving subject trail their sharp leading edges.

I asked Jim if it would be possible to reverse the order of exposure, to simulate the look of a "second curtain" flash, in which the blur follows the sharp image instead of preceding it, giving us nice still frames that match our comic-book-driven expectations. He gave me a pained look, and agreed that it would be nice, but it wouldn't be in the first release, and it would be a "science project" to see if it could be made to happen at all.

If you defer exposure combining to post-processed HDRx, the two images are stored as separate tracks in the same REDCODE file. Yes, this stores twice the data, and may require up to twice the data rate if one is to maintain the same compression levels (the highlight track will use different encoding parameters to optimize the data rate, so it usually won't be twice as much). Even so, EPIC provides REDCODE 50 and REDCODE 100 (RED ONE normally shoots in REDCODE 28 or REDCODE 36), so this isn't an insurmountable issue.

In HDRx, one may choose a straight combination of the two exposures, just as EasyHDR does; RED calls this "Magic Motion". It's quick and simple.

Alternatively, one can choose MNMB: More Normal Motion Blur. MNMB uses optical-flow analysis being developed in cooperation with The Foundry, a UK-based developer of high-end VFX software (does "Nuke" sound familiar?), to motion-blur the highlights image to match the normal image before combining them. The net effect should be a final image that looks just like a normal, 1/48 sec motion-blurred image, just a very flat one with greater-than-normal dynamic range. It will likely be a bit slower to render than "Magic Motion", and it's definitely too complex and slow a process to perform in real time in the camera itself.
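To make the MNMB concept concrete, here's a deliberately crude sketch (emphatically not The Foundry's algorithm) that smears the sharp highlight frame along a precomputed per-pixel motion field, assuming the optical flow has already been estimated elsewhere:

```python
import numpy as np

def motion_blur_along_flow(highlight, flow, samples=8):
    """Crude MNMB-style illustration: smear the sharp highlight frame
    along a per-pixel optical-flow field so its blur roughly matches
    the normal exposure before the two are combined.

    highlight is an HxW array; flow is HxWx2 (dx, dy) in pixels per
    frame; samples >= 2. Nearest-neighbor sampling keeps it simple;
    production code would interpolate and handle occlusions.
    """
    h, w = highlight.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    acc = np.zeros_like(highlight)
    for i in range(samples):
        t = i / (samples - 1)                        # position along the blur path
        sx = np.clip(xs - flow[..., 0] * t, 0, w - 1)
        sy = np.clip(ys - flow[..., 1] * t, 0, h - 1)
        acc += highlight[sy.round().astype(int), sx.round().astype(int)]
    return acc / samples
```

The hard part, of course, is the flow estimation itself, which is exactly where The Foundry's expertise comes in; this sketch only shows the final smearing step.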

I wasn't shown a MNMB image (I don't think the software is done yet), but I have high hopes; The Foundry writes very clever code. They have both the sharp highlight image available for precise motion tracking and the blurred normal image to show them exactly what motion path to follow between frames; they can use spatial cross-correlation to refine their post-blurred highlight image before merging, so I expect the result will look very "normal" indeed.

(Those same advantages may be a great boon for VFX compositors: if you have a sharp image for motion tracking, you can pull a more stable track with less bobble and jitter, requiring less manual cleanup. I can see capturing the highlights pass for tracking alone, even if you only use the normal exposure in the actual image. HDRx will allow that workflow.)

What I did see was the raw REDCODE 50 file from the Vegas shoot (along with several other clips shot in Vegas and at RED Studios) brought into Graeme's in-house development software, RAND, which Jim said stood for Research ANd Development (and Jarred described as Research And Nattress Development!). RAND looks like a version of RED Alert! on steroids, but built with the same UI toolkit as REDCINE-X.

Jim stepped through the raw clip; in REDCINE-X it appears as alternating bright (normal) and dark (highlight-exposure) frames, a display mode I found fascinating, and I hope will be available in whatever processing tool winds up being released. RAND shows each stream individually, or as the combined HDRx image, and it has some limited tools for tweaking the HDR tone mapping. RAND outputs TIFF files, and Jim decided to spit out TIFFs at half-res (2.5k pixel width) to load into their theater playback system, a DVS Clipster, for playback on their 4K-capable Sony SXRD projector.

Jim, Jarred, and Deanan all expressed some nervousness about doing this with me present; they hadn't looked at this clip (or any other HDR clips) on the big screen before: they only shot them on Saturday, rendered a low-res copy for the web on Sunday, and were swamped with other tasks on Monday, so they hadn't had the chance. They weren't sure it would hold up well, but they decided to go ahead and do it anyway.

It took about fifteen minutes to render half-res TIFFs of this 00:12:18 shot (remember, this is development code; it hasn't been optimized in any way, shape, or form), after which Jim put them on an SSD he borrowed from Jarred's desk and handed off to Deanan. Deanan dumped the frames into Clipster, which took a while; I was busy poking and prodding an EPIC at the time, so I didn't time it precisely. Eventually it was done, and all four of us walked over to the screening area at the other end of Stage 4 and watched playback.

It looked just fine.

Let me clarify that. It looked better than it should: it looked normal.

When you look at a still frame from a "Magic Motion" or "EasyHDR" clip, with the blurred normal exposure and the sharp highlights exposure, you might expect that playing that clip at 1x would give you a cross between normal blurry motion and short-shutter, staccato, pixilated motion—the "Gladiator" or "Saving Private Ryan" look.

It didn't look like that at all. It looked just like a regular, 1/48 sec exposure, with perfectly normal motion blur. I squinted and stared; I tried keeping a fixed gaze as well as following the motion; I stood right up close to the screen; I stood back where the screen "only" spanned 90 degrees of arc; I stood back at a normal viewing distance. We watched playback of the looped clip for about a minute, then I asked Deanan to pause the clip on a motion-blurred part, around nine seconds in, just to verify that the clip had the sharp/blurred "Magic Motion" look, and they hadn't snuck in a MNMB clip on the sly (they hadn't). For whatever reason, my eye and brain "read" the moving clip as a perfectly normal image, not a staccato "Saving Private Ryan" battle scene with a bit of low-key blur mixed in.

It's actually a bit disturbing how well it works.

Jim had the impression of greater sharpness than he would otherwise expect to see. I wasn't sure about that, myself; I couldn't say (without doing proper side-by-side tests) that the brain (or at least, my brain) was able to both see the added sharpness of the short exposure and mentally integrate it with the blur.

I had the impression (also subject to further experimentation) that the image looked slightly more natural than run-of-the-mill 24p digital camera panning, more like a film camera's image would. The blur of a moving image, captured on film, has a smooth "fade-in, fade-out" quality due to the penumbral sweep of the rotating shutter; the ends of motion trails in digital capture have harder edges from the "instant-on, instant-off" integration period of an electronic shutter, which give 180-degree electronic shutters a harsher, more staccato motion rendering. Does the bright-and-sharp, dim-and-blurred combo of "Magic Motion" mimic the penumbral feathering of motion blur, or at least stimulate the human visual system similarly? My gut impression is that it is working in a similar way, but I really need to do a side-by-side comparison before I say it's anything more than my own, entirely subjective feeling.

Jim claims that at +6, EPIC is capable of capturing 18 stops of dynamic range, which is more than most people routinely need; he thinks +3 (about 15-16 stops) will be the most commonly-used setting—when it's used at all. Most well-lit, well-controlled scenes won't need any HDR boost to begin with; the Mysterium-X sensor captures a fairly wide dynamic range as it is.
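The stop arithmetic behind those figures is simple; working backward from Jim's numbers, an 18-stop total at +6 implies a native range around 12 stops (my inference, not a published RED spec):

```python
def total_dynamic_range(base_stops, hdr_stops):
    """Total capture range: the sensor's native range plus the extra
    highlight headroom bought by the short exposure."""
    return base_stops + hdr_stops

# Assuming ~12 native stops for Mysterium-X (inferred from Jim's figures):
for s in (0, 3, 4, 5, 6):
    print(f"+{s}: ~{total_dynamic_range(12, s)} stops")
```

With that assumption, +3 lands at roughly 15 stops, consistent with the "about 15-16" figure, and +6 at 18.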

He showed me another clip on his Mac, shot behind Stage 4 on a sunny day some time between 10am and noon (just guessing from the shadow angles). There was a small car with metallic silver paint; there was another car with a shadowed, black grille. The highlights on the silver car were unclipped, and the shadows in the black grille weren't crushed. That clip was shot at HDRx +3, and it was just fine.

To summarize, then, and expand a bit:

• EPIC HDRx captures "two different exposures that are conjoined", a short one for highlights and a normal one for the main image. The short exposure can be set for +3, +4, +5, or +6 stops of highlight capture.

• The two exposures can be combined in-camera ("EasyHDR") and output as a single stream. The image shows normal motion blur over the lower part of the tonal scale, but adds a sharper trailing edge on highlights ("Magic Motion").

• The two exposures can be stored as separate tracks in the camera for post-processing ("HDRx"). In post, the tracks can be combined quickly for a "Magic Motion" look, or, with a bit more processing using Foundry-written code, as MNMB ("More Normal Motion Blur") that is expected to look just like regular shutter blur throughout the entire tonal scale.

• In full-speed playback, "Magic Motion" looks very much like normal shutter blur, much more so than still frames of Magic Motion would lead you to expect. In slo-mo, it becomes more apparent; at 50% speed, it's noticeable but oddly undisturbing; at 25% it's a definite "look" (download the clip and play with it in the NLE of your choice, and see what you think). Some people will like the look; others will think it's an abomination. That's freedom, isn't it?

• HDRx is an option: use it if you want, leave it off if you prefer.

• Shooting HDRx for post may offer significant advantages for VFX: the sharp highlight track may make motion tracking faster and more accurate, and may be worth capturing for that purpose alone, even if it's not used in processing the main image itself.

• These are early days: there's a lot more work to be done on the blending algorithms and overall implementation. What I saw was a work in progress, and no release date has been specified.

• EPIC HDRx is currently confined to EPIC. RED ONE lacks the processing power to do it, even with the Mysterium-X upgrade. There is some discussion of putting it in Scarlet, but that would probably kick the price up by $1000, and it's unclear that the Scarlet market would justify it (remember the optical finder option for RED ONE? RED never got a single order for one... so they never built it).

Overall, a most interesting demo. HDRx will be a much-appreciated addition to the RED camera toolkit, and I look forward to exploring it further once I can get my hands on a production-model EPIC.


FTC Disclaimer: No material connection exists between me and RED, other than as a customer. My employer, Meets The Eye LLC, purchased three RED ONEs two years ago on my recommendation, and we have since purchased two M-X upgrades (soon to be three), three RED ROCKET hardware decoders, two 18-85mm zoom lenses, and a selection of accessories. We are in line for EPIC-X upgrades, which we applied for several weeks ago. I do not personally own any RED products nor do I have any financial interest in the company. I paid my own way to Los Angeles ($182.95 airfare and rental car) and received no material compensation from RED, other than a Starbucks Coffee Frappuccino from the company fridge: retail value about $2.50.


Comments

Benjamin Rowland | September 23, 2010

This article was a thorough yet easy to understand explanation of this intriguing tech.  Thanks!

Charles Angus | September 23, 2010

Great article. I hadn’t thought of using the short shutter stream for tracking - but it makes so much sense! That will be a real boon to comping.

And all that DR… Wow.

Martin Weiss | September 23, 2010

Thanks for sharing.

I really do like the smooth motion you get in the pan. (This feature easily drowns in the amazing fact of having 18 stops of latitude.)

DJ Joofa | September 23, 2010

Interesting article. However, it seems like Red is in financial trouble. If they invited you they should have paid for the airfare + rental car. And, they should have taken you out to a dinner/lunch instead of a “Starbucks Coffee Frappuchino from the company fridge”!

Adam Wilt | September 23, 2010

“It seems like Red is in financial trouble.” Wait for Part 2; as you’ll see, the evidence is otherwise.

“They should have paid for the airfare + rental car.” That would be what is known as a “paid junket”, and the level of objectivity that a journalist on such a junket can be expected to maintain is questionable at best.

Especially with RED, a company subject to so much controversy over their marketing methods, I consider it essential to retain as much separation as possible from any charges of publishing a “paid review” or “advertorial.” I did not ask for compensation and none was offered (when Jim first wrote me, he had the impression I was local to LA, and could just stroll over for the afternoon). If RED had offered to pay my expenses, I would have declined.

“They should have taken you out to a dinner/lunch.” They did offer to buy lunch next door at the Studio cafeteria, but I was entranced by the sight of Jarred swapping lens mounts on the EPIC while it was still running (details in part 2), and asked about photographing the process. Jim said, “do it now”, and we just wound up talking through lunchtime. I had a flight back that afternoon, so dinner wasn’t a possibility.

I mention the $2.50 Frappuccino because (a) by strict interpretation of the FTC rules, I should; and (b) to contrast it with the cost I personally incurred, so that if the conspiracy theorists come after me saying I'm giving RED a positive writeup because they've bought my soul, I can say, "would you sell your soul to Jim Jannard for a $2.50 coffee, and NOT ask for travel expenses as well?"  (grin)

larry | September 24, 2010

Thank you for this very nice text that nicely explains the basic idea behind RED HDR system. It is indeed interesting why the image appears so pleasing to the eye.

jasorod | September 24, 2010

Great article Adam.  I’m curious to see what’s entailed in the patent application ... there is quite a bit of prior art in the area of mixing exposures from multiple read-outs of the sensor using a shuttering or other read-out mechanism to generate HDR imagery.

Stephen Webb | September 24, 2010

Hi Adam,

Great article. Just a thought, although you’re right to point out that the sharp highlight preceding the motion blur looks odd in a still, at 24fps it would look fine. At that speed your brain (if it even notices) would probably just assume that the motion blur precedes the sharp highlight in the next frame. Might even explain why it looked slightly more filmic to your eye.

IEBA | September 24, 2010

I thoroughly appreciated the information presented here and look forward to MNMB and HDRx as options for final output, but having each available for specific purposes during production. We’re really starting to see digital take us to places and capabilities that film could not imagine. Heck, you have to use special cameras just to reduce film wobble at the gate.

I appreciate you disclosing your expenses. Not every review site does this and it really makes a difference, especially when PVC now has such a direct, sponsored, tie-in with Sony, that readers are informed of exactly what might be going on behind the scenes.

Guustaaf | September 24, 2010

I have always been a big fan of Sony, but that ad over the home page is beginning to annoy me.

Ryan Damm | September 27, 2010

Won’t this HDRx wreak complete hell with anything requiring sync?  (Strobes, fluorescents, muzzle flare….)

I’m still excited to play around with it.

Martin Weiss | September 28, 2010

Ryan,

Just have a look at the example posted at Reduser - there is a fire engine driving by with flashing lights.

Video is here: http://red.cachefly.net/17hx.mov

Discussion here: http://www.reduser.net/forum/showthread.php?t=49940

Ryan Damm | September 28, 2010

Yeah, I saw the example—it’s very difficult to tell if it’s missing anything due to temporal sub-sampling; I suppose that’s a good sign, but it’s hardly definitive.  (In particular, I could imagine it’s difficult to take out flickering sources both in the highlights and the lowlights, something that test clip won’t indicate.)

And if there’s good info on that reduser thread, I probably won’t find it among the noise.  It’s increasingly tedious to pick out any real information on that forum, particularly if the words “epic” or “scarlet” get mentioned (especially scarlet). 

I really wish they had a ‘pro’ version of the forum, or something—every once in a while there’s good information, but it’s usually less than one post per page that matters.

Chris Meyer | September 28, 2010

I applaud the idea of trying to get the benefits of HDRI into more of our hands, but I worry about the time delay between the center of the two exposures. For one, I can imagine this might drive some VFX people crazy (for roto work & matchmove work, for example). But I’m speculating, and wouldn’t mind at all being wrong.

In a perfect world, two exposures would be taken at the same time, with the same shutter speed. That implies two sensors with a ND filter (or electronic equivalent) on one of them.

I wonder if there is any mileage in a two-sensor camera body that can be (somewhat) easily reconfigured between HDR and stereo…

Brian Harris | October 11, 2010

Stephen’s comment gave me an interesting idea.  What would happen if you combined the long exposure image with the next frame’s short exposure image?  That should give you the blur trailing the highlight.

Adam Wilt | October 11, 2010

“What would happen if you combined the long exposure image with the next frame’s short exposure image?” Unless you shoot with a fully open shutter, e.g., 360 degrees or 1/24 sec, there will be a gap between the end of the preceding frame’s long blur and the following frame’s short exposure.  And even then, there’s likely a small reset time that prevents the two exposures from being contiguous and uninterrupted, darn it.

Not that it seems to matter in moving images; the current “magic motion” look just works. It’s important for extracted stills, though, so I hope RED can rejigger HDRx to work on the “second curtain”. Not the end of the world if they can’t, but it would be nice if they could.

Floris Liesker | October 27, 2010

I did some MNMB imitation on the footage and I think it works like a charm. See http://vimeo.com/16244891
Now you say MNMB is definitely too complex and slow a process to perform in real time in the camera itself, and my After Effects render times sort of confirm that.
However.
Many modern flatscreen TV’s add in-between frames to make motion more smooth. Watching a film in that mode is like degrading it to a soap, in my opinion.
But nevertheless, a mid-priced flatscreen tv is capable of creating new interpolated frames based on motion estimation IN REAL TIME. I guess some sort of single-purposed hardware is doing the math for that.
Those algorithms could easily, instead of creating 8 new frames, blend those 8 together (the 180 degree samples, mind you) and create a smooth motion blur. In real time.
I suppose it wouldn’t be doing as good a job as the Foundry software is doing, but hey, this is real time! And I wouldn’t be surprised if it was cheap too, being mass-made.
Oh, the mass made stuff isn’t 5K res, I forgot. Oh well.
Maybe a 1080p motion blur in the EasyHDR™ preset would be looking nice enough. In celluloid film the highlights tend to be softer anyway, I think it would look fine.
