In the first chapter of our smartphone series, we went back in time to see how camera phones came to be. We explored the impact that camera phones, once they grew into smartphones, had on social networks, as well as the beginning of the megapixel race. Now we’re going to debunk a myth: we’ll see why a bigger megapixel number doesn’t necessarily translate into a camera that takes better pictures.
The megapixel sprint of the 2000s left a mark on many of us. Consumers still look for a magical MP number in spec sheets to decide whether a smartphone is worth buying. They still believe that “more is better” when, for pixels, “less is more” is often closer to the truth.
How pixels capture light
Imagine a puzzle. Each puzzle piece waiting to be put in place is a pixel, while the outline of the puzzle is the sensor (a megapixel, by the way, is simply one million of these pixels, or photodetectors). You can have an “easy” puzzle, with a few big pieces needed to complete it, or a more complex one where you’re given lots of small pieces.
Now remember, each piece is a pixel, and no matter the count, light goes through the lens and covers the entire puzzle, here known as the sensor area. So, if the sensor is made up of lots of small pixels, each pixel receives only a tiny share of that light (proportional to its area). With less signal per pixel, noise becomes much more visible, especially in dim scenes.
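To put rough numbers on the puzzle analogy, here is a quick back-of-the-envelope sketch in Python. The sensor dimensions are an approximation of a common 1/2.3-inch smartphone sensor, and the function name is just for illustration; all it does is divide the sensor area by the pixel count.

```python
import math

def pixel_pitch_um(sensor_width_mm, sensor_height_mm, megapixels):
    """Approximate side length of one square pixel, in microns,
    when a sensor of the given size is split into `megapixels` pixels."""
    area_um2 = (sensor_width_mm * 1000) * (sensor_height_mm * 1000)
    return math.sqrt(area_um2 / (megapixels * 1e6))

# A 1/2.3" type sensor measures roughly 6.17 mm x 4.55 mm.
for mp in (8, 12, 20, 48):
    print(f"{mp:>2} MP -> ~{pixel_pitch_um(6.17, 4.55, mp):.2f} microns per pixel")
```

On that sensor, 8 MP works out to pixels of roughly 1.9 microns, while 48 MP squeezes them down to about 0.76 microns, so each pixel catches only around a sixth of the light.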
Therefore, bigger pixels are preferred, even if there are fewer of them, since each one can collect more light. The catch is that bigger pixels call for a bigger sensor, and a bigger sensor needs a taller lens stack, which smartphone manufacturers aren’t willing to pay for in cost or in phone thickness. So, to compensate for the small hardware, they rely heavily on software.
The iPhone 7 Plus is a good example: Apple recreated an effect that was usually out of reach for smartphones by combining two cameras and a depth map to produce the bokeh effect in software. That’s because smartphones don’t have the hardware needed to blur the background the way DSLR cameras do: with such tiny sensors and short focal lengths, they can’t obtain a shallow depth of field, even though on paper their wide apertures should do the trick. The aperture, as you know, is the opening in the lens that lets light through to the sensor.
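To make the software-bokeh idea concrete, here is a minimal sketch of depth-based background blur. It is not Apple’s Portrait Mode pipeline; the function name, the normalized depth map and the use of SciPy’s Gaussian blur are all assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_bokeh(image, depth, focus_depth, depth_tolerance=0.1, blur_sigma=8):
    """Blur everything outside a chosen depth band to mimic a shallow
    depth of field. `image` is HxWx3 float in [0, 1]; `depth` is HxW,
    normalized so 0 = near and 1 = far."""
    # Blur each color channel of the whole frame once.
    blurred = np.stack(
        [gaussian_filter(image[..., c], blur_sigma) for c in range(3)], axis=-1
    )
    # 1 where the pixel sits near the focus plane, 0 everywhere else.
    in_focus = (np.abs(depth - focus_depth) < depth_tolerance).astype(float)[..., None]
    # Keep sharp pixels near the focus plane, blurred pixels for the rest.
    return in_focus * image + (1 - in_focus) * blurred
```

The whole trick is the depth map: once the phone knows how far away each pixel is, deciding what stays sharp and what gets blurred is simple arithmetic.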
How to go from light to color
Photodetectors are color-blind: they perceive only shades of gray. But cover them with red, green and blue filters and bam! – each photodetector records one red, green or blue value. To turn that mosaic of single-color dots into a proper full-color photo, the camera calls in an expert, the image signal processor (ISP). This guy knows where and how everything should go.
It analyzes the brightness each pixel recorded, compares it with its neighbors and estimates the two missing color values at every location, a step known as demosaicing. The result isn’t the finished photo, but the base the rest of the processing works from (the RAW file, if your phone saves one, is actually the untouched sensor data from before this step).
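For the curious, a bare-bones version of that neighbor-averaging step (bilinear demosaicing) fits in a few lines. It assumes an RGGB Bayer layout and uses SciPy for the convolutions; real ISPs use much smarter, edge-aware algorithms.

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(mosaic):
    """Rebuild an RGB image from an RGGB Bayer mosaic by averaging
    each missing color from its nearest recorded neighbors."""
    h, w = mosaic.shape
    # Masks marking where each color filter sits in the RGGB pattern.
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    # Bilinear interpolation kernels: green uses its 4 direct neighbors,
    # red and blue also need the diagonal neighbors.
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    r = convolve2d(mosaic * r_mask, k_rb, mode="same", boundary="symm")
    g = convolve2d(mosaic * g_mask, k_g, mode="same", boundary="symm")
    b = convolve2d(mosaic * b_mask, k_rb, mode="same", boundary="symm")
    return np.stack([r, g, b], axis=-1)
```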
Before you get lost in technicalities, the crucial thing to remember is that the ISP is the key to unlocking the picture. It not only translates light into color, it also fine-tunes focus, exposure and white balance like a DJ turning knobs on a mixer. On top of that, the ISP reduces noise and compresses the photo into a JPG.
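As a rough illustration of those final steps, here is a toy version of what an ISP might do right before export: a gray-world white balance, a simple gamma curve and JPG compression. The function name is hypothetical and the steps are heavily simplified compared to a real hardware ISP; the sketch assumes NumPy and Pillow.

```python
import numpy as np
from PIL import Image

def finish_photo(rgb, out_path="photo.jpg"):
    """Toy version of an ISP's last steps: gray-world white balance,
    a display gamma curve, then JPG export."""
    # Gray-world white balance: scale each channel so their means match.
    means = rgb.reshape(-1, 3).mean(axis=0)
    balanced = rgb * (means.mean() / means)
    # Apply a display gamma so dark tones are not crushed.
    gamma = np.clip(balanced, 0, 1) ** (1 / 2.2)
    # Quantize to 8 bits and hand the result to the JPEG encoder.
    Image.fromarray((gamma * 255).astype(np.uint8)).save(out_path, quality=90)
```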
There are premium smartphones that can save the picture in RAW… and that’s cool if you love to tinker with your shots. Otherwise, save time and storage space by letting the camera export JPGs and editing those.
In the end, it’s all a balancing act. You want to pair a high megapixel count with a reasonable pixel size (the closer it gets to 2 microns, the better) and a large sensor. Since that’s easier said than done inside a thin phone, manufacturers compensate with post-processing – more about this in our next chapter.
* This article is written as part of an editorial series presented by FotoNation.