
Is the Bayer chip best?

Stained glass window including red, green, blue and uncoloured panes.
“Tribute to Britecell,” as the artist didn’t call it. By Pexels user Igor Starkov.

One of the cutest things about subpixels on camera sensors is the way manufacturers hire teams of magical elves to put the colour filter array on the front, because only elves have small enough fingers to handle such tiny pieces of red, green and blue stained glass. A big advantage of this approach is that the elves are generally allowed, under union rules, to put almost any combination of colours you want on there. People have regularly done so, and advertised the result using the sort of trademarked terminology that’s been approved by a focus group.

There are a number of reasons for doing this. Sony put light-green pixels in the Cyber-Shot DSC-F828 in 2003 with the idea that it’d improve sensitivity (and thus potentially some combination of dynamic range and noise, too) as well as colour gamut. Similar technology was used in the Nikon Coolpix 8700, among many others. Samsung calls its white-pixel technology Britecell, and Fujifilm’s current X series cameras use the X-Trans layout, which includes a slightly higher proportion of green elements than Bayer’s fifty per cent. It’s not the first time the company has done something like that; its Super CCD technology used octagonal subpixels in essentially a 45-degree rotated grid pattern. A related approach was taken for the sensor used in the Sony F65, presumably in pursuit of less aliasing, and the F35 (and the essentially equivalent Panavision Genesis) used vertical RGB stripes.
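
For the pattern-curious, the difference is easy to see with the layouts written out as repeating tiles. The sketch below counts the proportion of green-filtered subpixels in each; the X-Trans tile is taken from commonly published renderings of Fujifilm’s layout, so treat it as illustrative rather than authoritative:

```python
# Illustrative only: the repeating tiles of two colour filter arrays.
# The Bayer tile is 2x2 (half green); the X-Trans tile shown here is one
# commonly reproduced 6x6 rendering, with greens scattered so that no row
# or column is entirely green-free.

BAYER_TILE = [
    "RG",
    "GB",
]

XTRANS_TILE = [
    "GBGGRG",
    "RGRBGB",
    "GBGGRG",
    "GRGGBG",
    "BGBRGR",
    "GRGGBG",
]

def green_fraction(tile):
    """Fraction of subpixels in a repeating tile that carry a green filter."""
    cells = "".join(tile)
    return cells.count("G") / len(cells)

print(f"Bayer green fraction:   {green_fraction(BAYER_TILE):.0%}")   # 50%
print(f"X-Trans green fraction: {green_fraction(XTRANS_TILE):.0%}")  # 56%
```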

Sensors in the factory; image courtesy David Gilblom of Alternative Vision Corporation. As technology improves, subpixel arrangement may become less relevant.

Not universal

So, the idea that Bayer sensors are universally adopted certainly isn’t true. If there’s a problem with all this, that 45-degree rotated sensor approach is talismanic of it. Yes, it avoids having vertical and horizontal rows of subpixels that come dangerously close to lining up with all the vertical and horizontal edges of the human-made world, and thus potentially reduces aliasing. Of course, that means it’ll deal less well with edges which happen to lie near 45 degrees to the vertical, such as chain link fences and the tombs of ancient Egyptians, though there’s probably a serviceable argument that more sharp details in the average picture are horizontal or vertical than diagonal.
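
The geometry behind that trade-off is simple arithmetic. As a back-of-envelope sketch, assuming an idealised square grid of unit pitch: rotating it 45 degrees tightens the row spacing seen by horizontal and vertical edges by a factor of root two, and loosens it by the same factor for diagonal ones:

```python
import math

p = 1.0  # photosite pitch of a square grid, in arbitrary units

# Rows of samples on an upright square grid are spaced p apart as seen by
# vertical and horizontal edges, but only p / sqrt(2) apart as seen by
# 45-degree edges. Rotating the whole grid 45 degrees swaps the two cases.
grids = {
    "upright":     {"horizontal/vertical": p, "diagonal 45-degree": p / math.sqrt(2)},
    "rotated 45°": {"horizontal/vertical": p / math.sqrt(2), "diagonal 45-degree": p},
}

for name, spacings in grids.items():
    for edge, s in spacings.items():
        # Finer row spacing means a higher limiting frequency before aliasing.
        print(f"{name:11s} grid, {edge:19s} edges: row spacing {s:.3f}, "
              f"Nyquist {1 / (2 * s):.3f} cycles/unit")
```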

Different colour layouts suffer similar issues. Even within the limits of a conventional Bayer filter layout, manufacturers are free to choose exactly which red, which green, and which blue is used. Denser filters potentially let the camera see colour more accurately, but absorb more light. Paler filters provide some combination of reduced noise, higher sensitivity or increased dynamic range, probably at the cost of colour precision. Adding filters of other colours is subject to many of the same engineering compromises. As evidenced by the continuing popularity of Bryce Bayer’s original design, there’s some argument that the situation represents more or less a zero-sum game; the upshot of all this has often been resounding indifference.
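
A toy model makes the compromise concrete. The sketch below uses entirely invented numbers, with each filter stood in for by a Gaussian passband: narrow “dense” filters admit less light but overlap less between channels, and broad “pale” filters do the opposite:

```python
import math

# Toy model: each colour filter is a Gaussian passband over the visible range.
# "Dense" filters are narrow (good colour separation, poor transmission);
# "pale" filters are broad (the reverse). All numbers are illustrative.

WAVELENGTHS = range(400, 701, 5)  # nm, sampled every 5nm
PEAKS = {"red": 600, "green": 540, "blue": 460}

def passband(centre, width_nm):
    """Transmission at each sampled wavelength for one filter."""
    return {w: math.exp(-((w - centre) / width_nm) ** 2) for w in WAVELENGTHS}

def total_transmission(band):
    """Proxy for sensitivity: light admitted under flat white illumination."""
    return sum(band.values()) / len(band)

def overlap(a, b):
    """Proxy for colour confusion: response shared between two channels."""
    return sum(min(a[w], b[w]) for w in WAVELENGTHS) / len(WAVELENGTHS)

for label, width in (("dense (narrow)", 25), ("pale (broad)", 70)):
    bands = {colour: passband(peak, width) for colour, peak in PEAKS.items()}
    t = sum(total_transmission(b) for b in bands.values()) / 3
    o = overlap(bands["red"], bands["green"])
    print(f"{label:15s} mean transmission {t:.2f}, red/green overlap {o:.2f}")
```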

Like a lot of things in photography, this stuff works in both directions. We’re used to monitors having combinations of red, green and blue subpixels just like a camera (often in columns, just like an F35). Keen-eyed visitors to the part of this year’s NAB show that deals with interesting new ideas might have noticed at least one multi-primary display technology being demonstrated, using a single LED video wall panel incorporating cyan emitters. Issues over colour precision that emerge when using unfiltered (that is, white) subpixels on a camera sensor are vaguely analogous to the issues that emerge when we use white subpixels on a display, although on the camera side the extra sensitivity is usually sold as a good thing. The requirements are slightly different when we’re trying to persuade the human visual system to behave as we’d prefer, though some of the same considerations apply.
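
On the display side, one simple and widely described approach to driving a white subpixel is to peel off the achromatic part of each pixel and route it to the unfiltered emitter. A minimal sketch of that idea, not any particular manufacturer’s pipeline:

```python
# A minimal sketch of one common RGBW strategy: send the grey component of
# each pixel to an unfiltered white subpixel. Real subpixel-rendering
# pipelines are far more involved; this only illustrates the basic trade.

def rgb_to_rgbw(r, g, b):
    """Split an RGB value (components 0.0-1.0) into RGBW, routing the
    common achromatic component to the white emitter."""
    w = min(r, g, b)                 # the achromatic part of the pixel
    return r - w, g - w, b - w, w    # remaining colour plus white drive

print(rgb_to_rgbw(1.0, 0.5, 0.25))   # (0.75, 0.25, 0.0, 0.25)
```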

Size conquers all. Especially the focus puller. Image by the author.

Subpixels less relevant

In the end, there’s some reason to believe that all of this could become less and less impactful as fundamental improvements to sensor technology keep on coming; as the sum underlying that zero-sum game becomes larger. We can have 14-stop, 4K, Super-35mm cameras, as Arri has shown. Even more illuminating are the 12K Super-35mm sensors Blackmagic engineered for its URSA Mini Pro 12K. Those photosites are so small that the sensor isn’t quite intended to work in the same way as all those others. That sort of configuration is diffraction limited at more than about an f/4-5.6 split, and probably resolution limited by at least some lenses in any event. The intent is not that every photosite is valuable resolution-wise; the intent is that a group of several forms a composite colour pixel.
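
The diffraction claim is easy to sanity-check. Assuming a Super-35 sensor roughly 27mm wide with 12,288 photosites across (both figures approximate), the Airy disk at mid-green wavelengths already spans a cluster of photosites by the time the iris reaches an f/4-5.6 split:

```python
# Back-of-envelope check on the diffraction claim, using assumed numbers:
# a Super-35 sensor roughly 27mm wide, 12288 photosites across.

WAVELENGTH_UM = 0.55          # mid-green light, in micrometres
SENSOR_WIDTH_MM = 27.0        # approximate Super-35 active width
PHOTOSITES_ACROSS = 12288     # "12K"

pitch_um = SENSOR_WIDTH_MM * 1000 / PHOTOSITES_ACROSS  # ~2.2 um per photosite

for f_number in (2.8, 4.0, 4.8, 5.6, 8.0):
    airy_um = 2.44 * WAVELENGTH_UM * f_number  # Airy disk diameter
    print(f"f/{f_number}: Airy disk {airy_um:.1f} um spans "
          f"~{airy_um / pitch_um:.1f} photosites of {pitch_um:.1f} um")
```

At an f/4-5.6 split the disk covers nearly three photosites, which is consistent with the idea that a group of them, rather than each one individually, carries the resolution.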

If the intent was to build a really good 4K camera, and if this represents the future, well, great. All of this is likely to become a less and less relevant consideration as time goes on, in much the same way that sheer resolution already has. Whether that’ll discourage anyone from patenting another configuration of red, green, blue and octarine remains to be seen.
