When the new iPhones were first announced in September, Apple showed off the new ultrawide-angle camera, Night Mode and an improved selfie camera, all of which represented a significant step forward for iPhone photography and video. And now that the new iPhones are in the wild, we’ve tested the cameras and can confirm those improvements, as well as how much fun that ultrawide-angle camera is to use. But there is one camera feature Apple teased at its fall iPhone event that no one has gotten to try: Deep Fusion. While it sounds like the name of an acid jazz band, Apple claims the brand-new photo-processing technique will make your pictures pop with detail while keeping image noise relatively low. On Tuesday, Apple released Deep Fusion as part of the latest iOS developer beta for the iPhone 11, 11 Pro and 11 Pro Max.
The best way to think of Deep Fusion is that you’re not meant to. Apple wants you to rely on this new technology but not think too much about it. There’s no button to turn it on or off, or really any indication that you’re in the mode.
Right now, anytime you take a photo on an iPhone 11, 11 Pro or 11 Pro Max, the default mode is Smart HDR, which takes a series of images before and after your shot and blends them together to improve the dynamic range and detail. If the environment is too dark, the camera switches automatically into Night Mode to improve brightness and reduce image noise. With Deep Fusion, anytime you take a photo in medium to low light conditions, like indoors, the camera will switch automatically into the mode to lower image noise and optimize detail. If you’re using the “telephoto” lens on the iPhone 11 Pro or 11 Pro Max, the camera will drop into Deep Fusion pretty much anytime you’re not in the brightest light.
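If it helps to picture that hand-off in code, here’s a rough sketch (in Swift) of the decision the camera seems to be making. Apple doesn’t expose any of this, so the lux thresholds and the names below are my own placeholders, not iOS internals.

```swift
// Illustrative only: Apple exposes no public API for this hand-off, and the
// lux thresholds and names here are placeholders, not iOS internals.

enum Lens {
    case ultraWide, wide, telephoto
}

enum CaptureMode {
    case smartHDR, deepFusion, nightMode
}

// Picks a processing mode from an estimated scene brightness (in lux) and the
// active lens, mirroring the behavior described above.
func captureMode(sceneLux: Double, lens: Lens) -> CaptureMode {
    // The telephoto lens gathers less light, so it leans on Deep Fusion in
    // everything but the brightest scenes.
    if lens == .telephoto {
        return sceneLux > 2_000 ? .smartHDR : .deepFusion
    }
    switch sceneLux {
    case ..<10:
        return .nightMode    // very dark: brighten the shot, cut noise
    case 10..<1_000:
        return .deepFusion   // indoor / medium light: detail with low noise
    default:
        return .smartHDR     // bright light: dynamic range
    }
}

// An indoor shot on the standard wide lens lands in Deep Fusion.
print(captureMode(sceneLux: 250, lens: .wide))   // deepFusion
```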
This means the iPhone 11, 11 Pro and 11 Pro Max have optimized modes for bright light, low light and now medium light. And I’d argue that most people’s photos are taken in medium- to low-light situations like indoors, so the impact Deep Fusion will have on your photos is enormous. It’s like Apple changed the recipe of Coke.
At the iPhone event, Apple’s Phil Schiller described Deep Fusion as “computational photography mad science.” And when you hear how it works, you’ll likely agree. Essentially, anytime you go to take a photo, the camera is capturing multiple images. (Smart HDR does something similar.) The iPhone takes a reference photo, which is meant to stop motion blur as much as possible. Next, it combines three standard exposures and one long exposure into a single “synthetic long” photo. Deep Fusion then breaks the reference image and the “synthetic long” photo into multiple regions like skies, walls, textures and fine details (such as hair). The software then does a pixel-by-pixel analysis of the two photos, 24 million pixels in total, and uses the results of that analysis to determine which pixels to use in building the final image.
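Apple hasn’t published the actual algorithm, but here’s a deliberately simplified sketch of the idea: blend the exposures into a “synthetic long” frame, then choose between it and the sharp reference frame pixel by pixel. The detail scoring and weighting below are stand-ins for whatever machine-learned analysis the A13 really runs.

```swift
// A heavily simplified sketch of the pipeline described above. Apple has not
// published Deep Fusion's actual algorithm; frames here are plain arrays of
// luminance values, and the per-pixel "detail" weighting is a stand-in for
// the real analysis.

typealias Frame = [Double]  // one luminance value per pixel

// Blends three standard exposures and one long exposure into a single
// "synthetic long" frame by averaging each pixel across the frames.
func syntheticLong(standard: [Frame], long: Frame) -> Frame {
    let all = standard + [long]
    return (0..<long.count).map { i in
        all.map { $0[i] }.reduce(0, +) / Double(all.count)
    }
}

// Pixel-by-pixel merge: where the sharp reference frame shows more local
// contrast (a crude proxy for fine detail), prefer it; elsewhere lean on the
// cleaner, lower-noise synthetic long frame.
func fuse(reference: Frame, syntheticLong synth: Frame) -> Frame {
    (0..<reference.count).map { i -> Double in
        let prev = i > 0 ? reference[i - 1] : reference[i]
        let detail = abs(reference[i] - prev)   // difference from the neighbor
        let w = min(detail * 4.0, 1.0)          // hypothetical weighting
        return w * reference[i] + (1 - w) * synth[i]
    }
}

// Toy example: a 4-pixel scene captured with a little sensor noise.
let reference: Frame = [0.20, 0.80, 0.25, 0.78]
let standards: [Frame] = [[0.22, 0.79, 0.24, 0.80],
                          [0.19, 0.81, 0.26, 0.77],
                          [0.21, 0.78, 0.25, 0.79]]
let longExposure: Frame = [0.21, 0.80, 0.25, 0.78]

let synth = syntheticLong(standard: standards, long: longExposure)
print(fuse(reference: reference, syntheticLong: synth))
```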
Apple says that the entire process takes a second or so. But to allow you to continue snapping shots, all of the information is captured right away and then processed whenever your iPhone’s A13 processor has a chance. The idea is that you won’t be left waiting on Deep Fusion before taking the next photo.
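In programming terms, it’s a lot like tossing the heavy lifting onto a background queue: the shutter returns right away, and the fusion runs whenever the chip gets to it. Here’s a toy illustration of that pattern; it is obviously not Apple’s pipeline, and every name in it is made up.

```swift
import Foundation

// A minimal sketch of the deferred-processing idea, not Apple's actual
// pipeline: the shutter returns immediately, and the expensive fusion work is
// queued up for whenever the processor has a spare moment.

let fusionQueue = DispatchQueue(label: "deep-fusion.example", qos: .utility)
let group = DispatchGroup()

func takePhoto(id: Int) {
    print("shutter \(id): frames captured")       // instant, never blocks
    fusionQueue.async(group: group) {
        Thread.sleep(forTimeInterval: 1)          // stand-in for ~1s of fusion work
        print("photo \(id): Deep Fusion finished")
    }
}

takePhoto(id: 1)
takePhoto(id: 2)   // fires immediately, even while photo 1 is still processing
group.wait()       // keep the script alive until the background work completes
```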
The release of Deep Fusion comes just a couple weeks before Google will formally announce the Pixel 4, its latest flagship phone in a line renowned for camera prowess.
I should note that Deep Fusion is only available on the iPhone 11, 11 Pro and 11 Pro Max, because it needs the A13 Bionic processor to work. I’m excited to try it out and share the results.