The dual camera system introduced with the iPhone 7 Plus began to blur the line of what is possible with images captured on a cell phone. The phrases “portrait mode” and “depth effect” are now as commonplace as dual camera setups themselves; in fact, the feature has become a staple of nearly every new phone release. But how well do these defocus algorithms perform against a true large-sensor camera?
Marques Brownlee recently set out to answer a question many of us have been scratching our heads about: “How Does Portrait Mode Work?” Pitting an iPhone X, a Note 8, and a Pixel 2 against the Hasselblad X1D, Brownlee came away with some interesting results.
The sensor in the Hasselblad X1D is roughly 80 times larger by area than those in the phones Brownlee compares it against. The level of defocus and the quality of the out-of-focus areas, or bokeh, come naturally from the optics as the focal plane shifts. Since the lenses on the iPhone X, Note 8, and Pixel 2 are so wide, most of the scene is already in focus unless you get extremely close.
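For a rough sense of that gap, the back-of-the-envelope arithmetic is simple, assuming the X1D’s published 43.8 × 32.9 mm medium-format sensor and a typical 1/3-inch (roughly 4.8 × 3.6 mm) phone sensor:

```python
# Back-of-the-envelope sensor area comparison.
# Dimensions in mm; the phone figure assumes a typical 1/3-inch sensor.
x1d_area = 43.8 * 32.9    # Hasselblad X1D: ~1441 mm^2
phone_area = 4.8 * 3.6    # ~1/3-inch phone sensor: ~17 mm^2

print(f"Area ratio: {x1d_area / phone_area:.0f}x")  # prints roughly 83x
```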
The strength of “portrait mode” on any iOS or Android device is its ability to recognize faces. As humans, we are hard-wired to notice faces, and the algorithms in the camera app pay special attention to them, often forgoing other details in the same focal plane, such as hair, ears, or nearby objects.
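None of the manufacturers publish their exact pipelines, but the core idea behind any depth-map-driven portrait mode can be sketched in a few lines. The minimal example below, using OpenCV and NumPy (the file names and the depth threshold are placeholders, not any vendor’s actual values), keeps pixels estimated to be near the subject sharp and composites everything else from a blurred copy of the frame:

```python
import cv2
import numpy as np

# Hypothetical inputs: the captured photo and a per-pixel depth map
# (0 = nearest, 255 = farthest), as a dual-camera pipeline estimates.
image = cv2.imread("photo.jpg")
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE)

# Blur the whole frame; this stands in for the synthetic bokeh.
blurred = cv2.GaussianBlur(image, (51, 51), 0)

# Anything beyond an (assumed) subject-distance threshold is "background".
mask = (depth < 100).astype(np.float32)     # 1.0 = keep sharp
mask = cv2.GaussianBlur(mask, (21, 21), 0)  # feather the mask edge
mask = mask[:, :, np.newaxis]               # broadcast over color channels

result = (image * mask + blurred * (1 - mask)).astype(np.uint8)
cv2.imwrite("portrait.jpg", result)
```

A real pipeline varies the blur radius continuously with depth and refines the mask around detected faces, which is exactly why fine hair and ear detail remain the usual giveaways in phone portraits.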
It is interesting to see how far the technology has progressed in a little over a year. To the untrained eye, and to most trained ones, images produced with dual cameras and defocus algorithms are almost indistinguishable from those made with larger-sensor cameras.
We have an upcoming review of an iPhone app that takes an entirely different approach to defocusing an image via its depth map, though, and you may be very surprised by the results.