In a nutshell, Apple’s explanation of Deep Fusion is that it’s “a new image processing system enabled by the Neural Engine of A13 Bionic. Deep Fusion uses advanced machine learning to do pixel-by-pixel processing of photos, optimizing for texture, details and noise in every part of the photo.”
Multiple photos are taken and analyzed at the pixel level to create a highly detailed image in about one second. With textured subjects such as sweaters, the gains in detail and clarity are especially evident.
Many people have started sharing Deep Fusion images now that the iOS 13.2 beta is out.
Some samples were shared by Halide camera app co-founder Sebastiaan de With, comparing an iPhone 11 Pro Deep Fusion image (iOS 13.2 beta) against a regular iPhone 11 Pro image (iOS 13.1.2).
Check out some more photos below in the embedded tweet:
In my first tests, Deep Fusion offers fairly modest gains in sharpness (and much larger files — my HEICs came out ~2x bigger). pic.twitter.com/ISclMKT1hK
— Sebastiaan de With (@sdw) October 2, 2019
Photographer Tyler Stalman also shared pictures taken on an iPhone 11 with Deep Fusion and compared them to shots from an iPhone XR, a useful comparison for anyone considering a camera upgrade to the iPhone 11:
— Tyler Stalman (@stalman) October 2, 2019
Deep Fusion files are roughly double the size of regular images, depending on the subject, while the pixel dimensions remain the same.
Are you running the iOS 13.2 beta right now? How are your Deep Fusion tests going so far?