In the last chapter of our smartphone series, we left you hanging on two concepts: computer vision and computational photography. We know that those of you who avoided computer science in school are about to close this window. It’s a pity, really, since computer vision and computational photography can be understood in a couple of minutes. In this chapter, you’ll learn how cameras think and, by doing so, “see” them in a whole different light.
It all starts with the “eye”, the lens-aperture-sensor-ISP (image signal processor) combo that captures the information from a scene and turns it into an image. Everything that happens after the eye falls into the hands of the “cortex”. The “cortex” takes the information sent by the ISP, interprets it and learns from it, so that it can later apply what it has learned to other images. This ability is called computer vision.
Think about a food picture – if you capture it with an analog, DSLR or mirrorless camera, you will likely store it somewhere, bound to be forgotten in two weeks. Take it with a smartphone, though, and a special algorithm can make sense of your meal, estimating its nutritional content and helping you monitor your diet.
Let’s say you’ve just spotted a beautiful piece of clothing on a stranger. Capture it with your smartphone camera, and apps like ASAP54 and INSPO can detect, scan and find where that item (or a similar one) comes from.
Computer vision can also help your tablet’s camera reveal the 3D structure of a room when you’re thinking about redecorating, or enable refocusing, background changes and object segmentation in an image through depth sensing.
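Once a camera has a depth map – a per-pixel distance estimate – separating a subject from its background becomes a thresholding problem. Here is a minimal numpy sketch; the 3-meter cutoff and the toy 2×2 scene are illustrative assumptions, not any vendor’s actual pipeline:

```python
import numpy as np

def segment_foreground(image, depth, max_depth):
    """Keep pixels closer than max_depth (the subject);
    zero out everything farther away (the background)."""
    mask = depth < max_depth
    return np.where(mask[..., None], image, 0.0), mask

# Toy 2x2 RGB scene: left column is 1 m away, right column 5 m away.
image = np.ones((2, 2, 3))
depth = np.array([[1.0, 5.0],
                  [1.0, 5.0]])
foreground, mask = segment_foreground(image, depth, max_depth=3.0)
# mask keeps the left (near) column and drops the right (far) one.
```

Replace the zeroed-out pixels with a blurred copy of the image and you get synthetic bokeh; replace them with another photo and you get a background swap.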
Computational photography tricks
Why stop there? Your creativity, your need for change and progress made digital enhancement techniques necessary in the first place. These tools are now part of what specialists call computational photography.
What can they do for you? Besides adding bokeh to phone pictures, computational imaging techniques can use tone mapping to create HDR (high dynamic range) images on the spot, extending dynamic range by up to 4 stops – this is what FotoMagic from FotoNation does. There’s more: if you want to refocus after taking a picture, algorithms can do that for you – just check the Huawei P9’s or Xiaomi Mi4’s features.
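Tone mapping is the compression step that makes a high-dynamic-range capture viewable on an ordinary screen. FotoNation’s actual algorithms aren’t public, but the classic global Reinhard operator gives the flavor; the synthetic 12-stop luminance ramp below is an illustrative assumption:

```python
import numpy as np

def reinhard_tonemap(luminance, key=0.18):
    """Global Reinhard tone mapping: scale the scene around its
    log-average luminance, then compress with L / (1 + L) so that
    highlights saturate smoothly toward 1 instead of clipping."""
    log_avg = np.exp(np.mean(np.log(luminance + 1e-6)))
    scaled = key * luminance / log_avg
    return scaled / (1.0 + scaled)

# A synthetic luminance ramp spanning 12 stops (a 4096:1 ratio),
# far more range than an 8-bit display can show directly.
hdr = np.logspace(0, np.log10(4096.0), num=100)
ldr = reinhard_tonemap(hdr)
# The whole ramp now fits in [0, 1), ready for display quantization,
# and the brightness ordering of the scene is preserved.
```

The compression curve is deliberately gentle in the shadows and aggressive in the highlights, which is why tone-mapped HDR shots keep detail at both ends of the exposure.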
Smartphone cameras like the Samsung Galaxy S7’s and Sony Xperia Z3+’s can tweak the colors and focus of food shots. Users can take flawless selfies with a OnePlus 3: its quick facial-feature enhancement and virtual make-up application erase the few imperfections they believe they have.
To top it all off, the HTC One M8 could remove unwanted objects from a scene after a burst of continuous shooting. That’s very hard to do with processing programs on computers, let alone with smartphone algorithms.
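One common way to pull this trick off is a per-pixel median over an aligned burst of frames: anything that occupies a given pixel in fewer than half the shots gets voted out, leaving the static background. A minimal numpy sketch – the toy “pedestrian” scene is an illustrative assumption, not HTC’s actual algorithm:

```python
import numpy as np

def remove_transients(frames):
    """Per-pixel median across an aligned burst: transient objects
    (a passer-by, a car) that appear in fewer than half the frames
    are voted out, leaving the static background."""
    return np.median(np.stack(frames, axis=0), axis=0)

# Static 4x4 background, with a "pedestrian" (value 255) crossing
# a different spot in each of 5 burst frames.
bg = np.full((4, 4), 10.0)
frames = []
for i in range(5):
    f = bg.copy()
    f[0, i % 4] = 255.0  # the moving object occupies one pixel per frame
    frames.append(f)

clean = remove_transients(frames)
print(np.array_equal(clean, bg))  # True: the moving object is gone
```

The hard part on a phone is not the median itself but the alignment: hand shake means the frames must be registered to each other before stacking.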
You can’t have one without the other
Well, just like the sensor can’t “see” color without an ISP – on its own it only records raw light intensities, one filtered sample per pixel – computational photography can’t do its job without computer vision. After all, how can you improve something without understanding it first?
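That sensor point is worth unpacking: each photosite sits behind a single red, green or blue filter (a Bayer mosaic), so the ISP has to interpolate the two missing channels at every pixel – a step called demosaicing. Here is a naive bilinear version in numpy; real ISPs use far more sophisticated, edge-aware interpolation:

```python
import numpy as np

def box3(a):
    """3x3 box sum with zero padding, numpy-only."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def demosaic_bilinear(raw):
    """Naive bilinear demosaic of an RGGB Bayer mosaic: for each
    color channel, average whatever true samples fall inside each
    pixel's 3x3 neighborhood."""
    h, w = raw.shape
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)  # green fills the remaining checkerboard
    rgb = np.zeros((h, w, 3))
    for c, mask in enumerate([r_mask, g_mask, b_mask]):
        samples = np.where(mask, raw, 0.0)
        counts = box3(mask.astype(float))
        rgb[..., c] = box3(samples) / np.maximum(counts, 1e-6)
    return rgb

# A uniform mid-gray scene: every photosite records 0.5 no matter
# which filter covers it, and demosaicing recovers gray everywhere.
raw = np.full((4, 4), 0.5)
rgb = demosaic_bilinear(raw)
```

Without this reconstruction step there is no color image at all – which is exactly why the “eye” and the “cortex” only make sense as a pair.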
You can’t have one without the other, as the Married with Children theme song put it. If, in the 2000s, the spec war fueled consumers’ interest and pushed the smartphone industry to new heights, it’s these post-ISP tools that will reinvent it now.
Why? Because from a certain point on, perfecting the “eye” is almost futile – as difficult as trying to travel at the speed of light. Past a certain threshold, every infinitesimal gain in speed adds more relativistic mass and demands even more energy to keep accelerating; you’d end up needing infinite energy to propel an infinite mass.
We’ll let you mull that over for a while… but not before revealing what’s next as the series progresses. We’ll talk about the ONE THING that could help phones’ cameras take better pictures without sacrificing processing power and battery life.
* This article is written as part of an editorial series presented by FotoNation.