Researchers from the University of California, Santa Barbara and Nvidia have developed a new algorithm that allows photographers to modify key aspects of their composition after capture.
The new technique is called computational zoom, “a framework that allows a photographer to manipulate several aspects of composition in post-processing from a stack of pictures captured at different distances from the scene”. In doing so, it lets photographers produce compositions that would be physically unattainable with a single shot.
To use it, photographers first capture a stack of pictures at the same fixed focal length while moving toward the scene, “thus supporting widespread consumer devices such as camera phones.” Computational zoom then synthesizes novel multi-perspective images from that stack, giving the user control over the sense of depth in the final picture.
The framework lets photographers “modify the field of view, the extension distortion, and the perspective of the image. […] it can simulate multi-perspective cameras that can image different depth ranges in the scene with different focal lengths,” the team explains in their published paper. No physical camera available today can achieve such results in a single exposure.
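The key optical fact behind this is that perspective is governed by camera position, not focal length: moving the camera while adjusting focal length to keep a subject the same size changes how large the background appears (the classic “dolly zoom” effect). The toy pinhole-camera sketch below (not the authors' code; all numbers are made up for illustration) shows this trade-off, which is the property computational zoom exploits by blending images taken at different distances.

```python
# Toy pinhole-projection sketch: perspective depends on camera distance,
# not focal length alone. Two camera setups keep the subject the same
# size on the sensor, yet render the background at different sizes.

def projected_size(object_height, object_distance, focal_length):
    """Image-plane size of an object under a simple pinhole model."""
    return focal_length * object_height / object_distance

# Hypothetical scene: subject and a background object, both 2 m tall,
# with the background 10 m behind the subject.
subject_h, background_h = 2.0, 2.0

# Near camera (subject at 4 m) with a 24 mm focal length.
near = (projected_size(subject_h, 4.0, 24.0),
        projected_size(background_h, 14.0, 24.0))

# Far camera (subject at 8 m) with a 48 mm focal length, chosen so the
# subject projects to exactly the same size as before.
far = (projected_size(subject_h, 8.0, 48.0),
       projected_size(background_h, 18.0, 48.0))

print(near)  # subject 12.0, background ~3.43
print(far)   # subject 12.0, background ~5.33 (background looms larger)
```

Same subject size, different background magnification: combining such viewpoints per depth range is, in effect, what a simulated multi-perspective camera does.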
This flexibility in realizing an artistic vision is what makes computational zoom so promising. Better still, the team isn’t keeping the findings to themselves: they intend to make the technique available to the general public in the form of software plug-ins.