Monochrome camera sensors are capable of higher detail and sensitivity than would otherwise be possible with color. However, understanding why requires a closer look at sensor technology. This article gives an overview of the key differences between each sensor type, along with how these influence the resulting image.
INTRO TO PHOTOSITES
Virtually every digital sensor works by capturing light in an array of photosites, similar to how a grid of buckets would store falling rain. When the exposure begins, each photosite is uncovered to collect incoming light. When the exposure ends, the occupancy of each photosite is read as an electrical signal, which is then quantized and stored as a numerical value in an image file.
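The read-out step above can be sketched in a few lines of Python. All of the numbers here (well capacity, bit depth, photon counts) are made-up assumptions for illustration only:

```python
# Hypothetical illustration: photosites as buckets whose fill level is
# quantized into numeric pixel values at the end of an exposure.
photon_counts = [120, 4050, 9800, 15000]  # light collected per photosite (made-up units)
full_well = 16000                          # assumed photosite capacity
bit_depth = 12                             # assumed ADC precision

def quantize(count):
    # Clip to the photosite's capacity, then scale to the ADC's integer range.
    clipped = min(count, full_well)
    return round(clipped / full_well * (2**bit_depth - 1))

values = [quantize(c) for c in photon_counts]
print(values)  # → [31, 1037, 2508, 3839]
```

Note that any photosite filled beyond its capacity simply reads as the maximum value, which is the origin of highlight clipping discussed later.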
However, the photosites described above only measure the total quantity of light, not its color. To achieve color, a sensor also needs a way to distinguish light of different colors and record a value for each separately.
Color sensors work by capturing only one of several primary colors at each photosite in an alternating pattern, using something called a "color filter array" (CFA). The most widespread and successful CFA has been the Bayer pattern, which uses alternating rows of red-green and green-blue filters:
A necessary but undesirable side-effect of CFAs is that each photosite effectively captures only about 1/3 of the incoming light, since any color not matching its filter is blocked. Any red or blue light that strikes a green photosite won't be recorded, for example.
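The alternating pattern just described can be generated programmatically. As a sketch, an RGGB starting corner is assumed here; actual sensors may begin the pattern on a different corner:

```python
# Bayer color filter array: alternating rows of red-green and green-blue
# filters. Green appears twice per 2x2 block, mimicking the eye's greater
# sensitivity to green light.
def bayer_color(row, col):
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

for r in range(4):
    print(" ".join(bayer_color(r, c) for c in range(4)))
# → R G R G
#   G B G B
#   R G R G
#   G B G B
```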
The CFA also means that at any given photosite location, one color will be measured directly and the other two will have to be inferred. This process of intelligently combining photosites to produce full color pixels is referred to as "demosaicing" (or "debayer" within REDCINE-X PRO®), and happens during RAW development.
Note: the above diagram is a simplified example of demosaicing. Actual algorithms are typically more sophisticated, and may adapt depending on edges, texture or other image content.
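A minimal sketch of the simplified (bilinear) approach, using made-up values: the missing green value at a red photosite is estimated by averaging its four green neighbors.

```python
# Simplified bilinear demosaicing: in a Bayer mosaic, each red photosite
# has green photosites directly above, below, left and right, so the
# missing green value there can be estimated as their average.
def green_at_red(mosaic, r, c):
    neighbors = [mosaic[r - 1][c], mosaic[r + 1][c],
                 mosaic[r][c - 1], mosaic[r][c + 1]]
    return sum(neighbors) / len(neighbors)

# Toy 3x3 patch of raw values centered on a red photosite (made-up data;
# the zeros stand in for non-green sites that don't enter this estimate).
raw = [
    [0, 60, 0],
    [52, 0, 56],
    [0, 64, 0],
]
print(green_at_red(raw, 1, 1))  # → 58.0
```

Edge-aware algorithms would instead weight these neighbors depending on local image content, for example favoring the pair that lies along a detected edge.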
However, demosaicing is less of a disadvantage than the above diagram might lead one to believe. Detail can actually be extracted very efficiently, in part because Bayer arrays have been a well-studied standard for over a decade. In practice, eliminating the need for demosaicing improves resolution by only roughly 20%: definitely noticeable, but not the improvement one might initially expect. See resolution vs. aliasing for one reason why.
Unlike color sensors, monochrome sensors capture all incoming light at each pixel regardless of color. Each pixel therefore receives up to 3X more light, since red, green and blue are all absorbed:
This translates into a 1 to 1.5 stop improvement in light sensitivity. This can also improve the appearance of noise when shooting under artificial light or other color temperatures which differ substantially from daylight.
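The sensitivity figure follows from the definition of a stop as a doubling of light. A rough check in Python (the 3X multiple is the theoretical ceiling from above; real-world gains are somewhat lower):

```python
import math

# One stop = a doubling of light, so an N-fold increase in captured
# light corresponds to log2(N) stops.
def stops(light_multiple):
    return math.log2(light_multiple)

print(round(stops(3), 2))  # theoretical ceiling for 3X more light → 1.58
print(round(stops(2), 2))  # a plain doubling → 1.0
```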
Unlike color sensors, monochrome sensors do not require demosaicing to create the final image. The values recorded at each photosite effectively just become the values at each pixel. As a result, monochrome sensors are able to achieve a slightly higher resolution:
Another benefit is that monochrome sensors typically have more predictable highlight clipping. With color, clipping can occur in just the red, green or blue channel, whereas with monochrome, clipping is an all-or-nothing process. In practice, this can mean that images will have a slightly higher usable dynamic range, especially when considering that monochrome shadow noise is typically less objectionable than with color.
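The difference can be illustrated with a quick sketch (the 8-bit clip level and helper functions below are hypothetical, not from any particular camera):

```python
# Per-channel clipping check for a color pixel, versus the single
# all-or-nothing check for a monochrome pixel (8-bit values assumed).
CLIP = 255

def clipped_channels(r, g, b):
    # A color pixel can clip in any subset of its three channels.
    return [name for name, v in (("R", r), ("G", g), ("B", b)) if v >= CLIP]

def mono_clipped(v):
    # A monochrome pixel either clips entirely or not at all.
    return v >= CLIP

print(clipped_channels(255, 240, 180))  # → ['R'] (partial, harder-to-predict loss)
print(mono_clipped(240))                # → False (the whole pixel is intact)
```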
In many ways, the advantages of monochrome sensors are very similar to traditional monochrome film: image noise is lower at equivalent ISO speeds, and resolution is higher. Such improvements in image quality can be critical when shooting video that might later be used for stills in print advertisements.
However, not all advantages will be realizable, depending on intended use. For example, those familiar with traditional black and white photography may want to use lens-mounted color filters to control scene contrast. This might include using a red filter with landscapes, since these normalize the otherwise stark contrast between blue skies and green foliage, while also enhancing local contrast within each region. On the other hand, color filters also reduce available light by up to two thirds, thereby offsetting any sensitivity gain from monochrome.
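The offset mentioned above can be verified with back-of-the-envelope arithmetic (the 3X gain and two-thirds loss are the upper bounds quoted in the text):

```python
import math

# A CFA-free monochrome sensor gains up to 3X light; a strong color
# filter can block up to 2/3 of incoming light, i.e. it passes 1/3.
mono_gain_stops = math.log2(3)    # up to ~1.58 stops gained
filter_loss_stops = math.log2(3)  # passing only 1/3 of light loses the same amount
net_stops = mono_gain_stops - filter_loss_stops
print(round(mono_gain_stops, 2), round(net_stops, 2))  # → 1.58 0.0 (the effects cancel)
```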
One should also consider whether the quality of a monochrome sensor outweighs the flexibility of a color sensor. One can always convert color into monochrome afterwards, for example. Furthermore, with color capture, any arbitrary color filter can be applied in post production to customize the monochrome conversion, whereas with monochrome capture, the effects of a lens-mounted color filter are irreversible. Overall though, when output flexibility isn't needed, proper monochrome capture will always produce superior results.