The Samsung Galaxy S7 is rumored to have a 12MP BRITECELL rear-facing camera with a fast f/1.8 aperture. To make it more compelling, the rumors also talk about a waterproof metal design (although the leaked pictures show a plastic design). Rumors also suggest a 5-megapixel front camera, a 6.7-inch Quad HD AMOLED display and a 3,000mAh battery.
So first of all, what is this BRITECELL sensor that is mentioned everywhere? Well, it is a new sensor announced by Samsung in 2015. It doesn’t exist in any smartphone yet and is expected to be used in the Galaxy S7. The sensor is an improved version of Samsung’s ISOCELL technology. It produces images with fewer artifacts and improved performance even when smaller pixels are used. The sensor module is also smaller than the ISOCELL one (a 17% reduction in height, according to Samsung), allowing Samsung to build smartphones with a slimmer profile. One of the biggest changes Samsung implemented in this new BRITECELL sensor is that the green pixels are replaced with white ones (an RWB array).
It’s not the first time we’ve heard about a sensor using white pixels, also known as RGBW. For example, the Huawei P8 uses a 13MP sensor with a 4-color RGBW color filter array. Instead of the RGGB matrix of a conventional Bayer color array, one green pixel is replaced with a white pixel. This allows the sensor to collect more light, because the light is not filtered at that white pixel. The interpolation algorithm therefore has more data to work with when reconstructing color, in addition to the extra photons collected, which improve low-light performance.
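To make the light-collection argument concrete, here’s a toy Python sketch (my own illustration, not from Samsung or Huawei; the assumption that each color filter passes roughly one third of incident white light is a rough simplification for demonstration only) comparing a 2×2 Bayer RGGB tile with an RGBW tile where one green pixel is left unfiltered:

```python
# Toy comparison of light collected per 2x2 color-filter-array tile.
# Illustrative assumption: each R/G/B color filter passes ~1/3 of the
# incident white light, while a white (unfiltered) pixel passes all of it.

FILTER_TRANSMISSION = {"R": 1 / 3, "G": 1 / 3, "B": 1 / 3, "W": 1.0}

def tile_light(tile):
    """Relative amount of light collected by one 2x2 CFA tile."""
    return sum(FILTER_TRANSMISSION[p] for p in tile)

bayer_rggb = ["R", "G", "G", "B"]   # conventional Bayer pattern
rgbw       = ["R", "G", "W", "B"]   # one green pixel swapped for white

print(tile_light(bayer_rggb))                      # 4/3 ≈ 1.33
print(tile_light(rgbw))                            # 2.0
print(tile_light(rgbw) / tile_light(bayer_rggb))   # 1.5x more light
```

Under this simplified model, swapping a single green pixel for a white one raises the tile’s total light collection by about 50%, which is the intuition behind the low-light claims for RGBW (and RWB) sensors.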
Here’s an article on Sony.net demonstrating the differences between two cameras, one using a Bayer array sensor and the other using “White” (transparent) pixels. The BRITECELL array differs from RGBW in that it lacks green (G) pixels entirely. This means that in the 9-color matrix there will be 5 white pixels that respond to all colors of light. RGBW sensors are typically designed so their output can be processed with a standard Bayer demosaicing algorithm, but I don’t know whether Samsung came up with its own algorithm to process the data from this sensor or whether it was also designed to work with the standard one.
Now, because the sensor collects more light photons, Samsung can use smaller pixels, such as 1.0 micron, and still achieve the same light sensitivity as a sensor with 1.12-micron pixels.
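As a rough back-of-the-envelope check (my own arithmetic, not a Samsung figure), the light-gathering area of a pixel scales with the square of its pitch, so the gap the white pixels need to make up is easy to quantify:

```python
# How much more light a 1.12-micron pixel collects than a 1.0-micron
# pixel, assuming light gathered is proportional to pixel area.
small_pitch = 1.0   # microns
large_pitch = 1.12  # microns

ratio = (large_pitch / small_pitch) ** 2
print(round(ratio, 2))  # ≈ 1.25: the larger pixel collects ~25% more light
```

In other words, shrinking from 1.12 to 1.0 micron costs roughly 20–25% of the light per pixel, which the unfiltered white pixels would have to compensate for.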
I am personally not a big fan of high-resolution sensors in mobile devices, and I hope that Samsung wasn’t tempted to use a sensor resolution higher than 12MP. I think that 12MP on a relatively large sensor (e.g. 1/2.3″), combined with this technology, can yield exceptional results. A fast f/1.8 aperture is also very useful for better low-light performance. If the S7 does indeed have an f/1.8 lens, it will be faster than the Note 5 (f/1.9) and iPhone 6s Plus (f/2.2), and the same as the LG V10 and LG G4 lenses. We can also assume that the new Samsung Galaxy S7 will have optical image stabilization, for better low-light performance when shooting static subjects, and a phase-detection autofocus system for better subject-tracking performance.
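For context, the light a lens gathers scales with the inverse square of its f-number, so the rumored f/1.8 advantage over those rivals can be quantified with a quick calculation (my own arithmetic, based on the f-numbers mentioned above):

```python
# Relative light gathered by a faster lens: area is proportional to 1/N^2,
# where N is the f-number.
def relative_light(f_slow, f_fast):
    """How many times more light the faster (smaller f-number) lens gathers."""
    return (f_slow / f_fast) ** 2

print(round(relative_light(1.9, 1.8), 2))  # vs Note 5 f/1.9: ≈ 1.11x
print(round(relative_light(2.2, 1.8), 2))  # vs iPhone 6s Plus f/2.2: ≈ 1.49x
```

So an f/1.8 lens would gather only about 11% more light than the Note 5’s f/1.9, but nearly 50% more than the iPhone 6s Plus’s f/2.2, which is a meaningful difference in low light.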
The combination of advanced sensor technology, pixel size, aperture and image processing, if done well, can have a great impact on image quality. We can only hope that the Samsung Galaxy S7 (and S7 edge) will indeed have these advanced camera and sensor technologies – “May the Photons be with you, my dear S7”.