
Bayer best?

This is a Blackmagic Ursa Mini’s sensor. The colour is not from the filter array; it comes from interference between the wavelength of the incoming light and the fine pitch of the features on the chip

Building colour images from red, green and blue is probably one of the most fundamental concepts in film and TV technology. Most people move quickly on to the slightly more awkward question of why there are three components, and the usual answer is that our eyes have red, green and blue-sensitive structures, so we’ve often built cameras which duplicate that approach.

The reason we’re discussing this is, in part, an interesting design from Image Algorithmics, which has proposed a sensor it describes as “engineered like the retina.” That could refer to a lot of things, but here it means IA’s choice of filters it calls “long,” “medium” and “short,” a departure from Bryce Bayer’s RGB filters. That’s interesting because long, medium and short is the terminology often used to describe how human eyes actually work: there isn’t really red, green and blue; there’s yellowish, greenish and bluish, and none of them are deep colours.

Human colour

A quick glance at the data makes it clear just how unsaturated those sensitivities really are in human eyes. It’d be easy to assume that humans might struggle to see saturated colours in general, and red in particular: the sensitivity curves are so wide that red light might register as little more than a powdery, faded yellow and an equally pale green, and the yellow and green curves overlap enormously. In practice, the human visual system detects red by (in effect) subtracting the green from the yellow, a biological implementation of the matrix operations we see in some electronic cameras.
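To make that concrete, the opponent-process idea amounts to nothing more exotic than a small matrix multiply. The weights below are purely illustrative, not measured colorimetric values:

```python
import numpy as np

# Toy opponent-process matrix: cone responses (long, medium, short) are
# combined into a brightness channel and two colour-difference channels.
# The weights are illustrative only, not real colorimetric data.
lms_to_opponent = np.array([
    [0.5,  0.5,  0.0],   # brightness: sum of long and medium responses
    [1.0, -1.0,  0.0],   # red-green: "subtract the green from the yellow"
    [0.5,  0.5, -1.0],   # blue-yellow: short response versus the other two
])

cones = np.array([0.9, 0.4, 0.1])   # hypothetical response to reddish light
print(lms_to_opponent @ cones)      # strong positive red-green component
```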

When Bayer was doing his work in the 1970s, it would have been possible to build a sensor with long, medium and short-wavelength-sensitive filters to match the human eye. What would have been trickier is the compact, power-frugal electronics capable of turning the output of such a sensor into a usable image. So Bayer took the direct route, with red, green and blue filters which nicely complemented the red, green and blue primaries of display devices. Modern Bayer cameras use complex processing, but early examples were often fairly straightforward and mostly worked reasonably well.
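As an illustration of quite how simple that early processing could be, here’s a minimal bilinear demosaic in Python. It’s a sketch assuming an RGGB tile, not any particular camera’s pipeline:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Toy bilinear demosaic of an RGGB Bayer mosaic: each missing
    sample is a weighted average of its like-coloured neighbours."""
    h, w = raw.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True                  # red
    masks[0::2, 1::2, 1] = True                  # green (even rows)
    masks[1::2, 0::2, 1] = True                  # green (odd rows)
    masks[1::2, 1::2, 2] = True                  # blue
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]])
    rgb = np.zeros((h, w, 3))
    for c in range(3):
        sampled = np.where(masks[..., c], raw, 0.0)
        # Normalised convolution: average only over sampled positions.
        weight = convolve(masks[..., c].astype(float), kernel)
        rgb[..., c] = convolve(sampled, kernel) / np.maximum(weight, 1e-9)
    return rgb
```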

With modern processing it works even better, so the question might be what Image Algorithmics expects to gain from its new filter array. The simple answer is that less saturated filters pass more light, potentially improving noise, sensitivity, dynamic range, or some combination thereof. Image Algorithmics proposes a sensor with 50% yellow, 37.5% green and 12.5% blue subpixels, which approximates the distribution of long, medium and short-sensitive cone cells across the human retina.
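The exact tiling hasn’t been published, but those proportions divide neatly into a 4×4 repeating tile of eight yellow, six green and two blue photosites. The arrangement below is purely hypothetical, just to show the arithmetic:

```python
from collections import Counter

# One possible (entirely hypothetical) 4x4 tile with IA's stated ratios.
tile = [
    ["Y", "G", "Y", "G"],
    ["G", "Y", "B", "Y"],
    ["Y", "G", "Y", "G"],
    ["G", "Y", "B", "Y"],
]
counts = Counter(p for row in tile for p in row)
for colour in "YGB":
    print(colour, f"{100 * counts[colour] / 16:.1f}%")
# Y 50.0%, G 37.5%, B 12.5%
```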

Image Algorithmics’ drawing comparing a conventional Bayer colour filter array with the configuration of the eye. Drawing courtesy the company

Existing ideas

This is not entirely new; Sony used an emerald-sensitive pixel (which looks rather cyan in most schematics) on the Cyber-shot DSC-F828 as early as 2003, while Kodak used cyan, magenta and yellow filters in the Nikon F5-based DCS 620x and 720x around the turn of the millennium. Huawei has made cameras in which the green element of a Bayer matrix is replaced with yellow. The Blackmagic Ursa Mini Pro 12K uses a sensor with red, green, blue and unfiltered photosites, presumably yielding sensitivity gains which are very relevant to such a densely-packed sensor.

Other approaches have also been explored. Kodak’s cyan, magenta and yellow sensor, using secondary colours, passes fully double the light through the filter layers, though the mathematical processing required often means turning up the saturation quite a bit, which can introduce noise of its own. The sensor’s differing sensitivity to cyan, magenta and yellow light can also offset some of the improvement. IA itself voices caution about Huawei’s red-blue-yellow design, which encounters some odd mathematical issues (a bit outside the scope of this article) around using red filters to approximate the human response to red light.
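The noise penalty is easy to see in the idealised maths: cyan, magenta and yellow are each the sum of two RGB primaries, so getting RGB back means inverting that matrix, and the inverse is full of subtractions which mix noise from all three measurements into every output channel. A sketch:

```python
import numpy as np

# Idealised filters: cyan = G + B, magenta = R + B, yellow = R + G.
rgb_to_cmy = np.array([
    [0.0, 1.0, 1.0],
    [1.0, 0.0, 1.0],
    [1.0, 1.0, 0.0],
])

# Recovering RGB inverts the matrix; note the negative weights.
print(np.linalg.inv(rgb_to_cmy))
# [[-0.5  0.5  0.5]
#  [ 0.5 -0.5  0.5]
#  [ 0.5  0.5 -0.5]]
```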

Dyes are a common source of – er – interesting results in digital imaging, since very deep dye colours, of a kind which barely exist in nature, can end up reflecting a strange spectrum of light.

The inevitable compromise

Suffice it to say that, in general, no matter what combination of colours is used, there’s a choice to make between brightness noise, colour noise, sensitivity and dynamic range. For complicated reasons, colour noise is easier to fix than brightness noise, not least because the eye is much less sensitive to fine colour detail than to fine brightness detail, so colour noise can be smoothed away more aggressively. It’s mainly that idea which has led IA to the yellow-green-blue layout it favours here.
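One simplified way to see why: split the image into a brightness (luma) channel and colour-difference (chroma) channels, and the chroma can be blurred surprisingly hard before the eye objects, taking the colour noise with it. The transform below is illustrative rather than any broadcast standard:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_chroma(rgb, sigma=3.0):
    """Blur only the colour-difference channels; fine brightness detail,
    which the eye cares about most, is left untouched."""
    # Rec. 709 luma weights; the chroma split is a simple illustration.
    luma = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    chroma = rgb - luma[..., None]
    chroma = gaussian_filter(chroma, sigma=(sigma, sigma, 0))
    return luma[..., None] + chroma
```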

The company suggests the design should achieve a 4.25dB signal-to-noise advantage over RGB “in bright light,” and perhaps a little more in lower light. That may not sound astounding, although the company also promises a similar improvement in dynamic range, for a total improvement of more than a stop. Encouraging as that is, we should be clear that this is an idea, without even a demonstration sensor having been made, and it’s clearly some time from a film set near you.
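For scale, assuming the figure is quoted in the usual 20·log10 amplitude sense, one stop (a doubling of signal) is about 6dB, so 4.25dB on its own is roughly two-thirds of a stop; adding a similar dynamic range gain is what gets the total past a stop:

```python
import math

# One stop doubles the signal: 20 * log10(2) ≈ 6.02dB per stop
# (assuming the SNR figure is an amplitude ratio).
stops = 4.25 / (20 * math.log10(2))
print(f"{stops:.2f} stops")  # ~0.71
```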

What really matters is not this particular design; alternative filter arrays have been tried before, and given that the overwhelming majority of cameras still use Bayer sensors, we might reasonably conclude that the results of those experiments have not been overwhelmingly positive. Cinematographers are a cautious bunch, too, and anyone proposing anything as (comparatively) outlandish as an LMS sensor might need a strategy to handle that caution as well as the technology itself – but if some sort of alternative to Bayer can be made to work, it’s hard to object.
