When You Take a Great Photo, Thank the Algorithm in Your Phone


Not too long ago, tech giants like Apple and Samsung raved about the number of megapixels they were cramming into smartphone cameras to make photos look clearer. Nowadays, all the handset makers are shifting focus to the algorithms, artificial intelligence and special sensors that are working together to make our photos look more impressive.

What that means: Our phones are working hard to make photos look good, with minimal effort required from the user.

On Tuesday, Google showed its latest attempt to make cameras smarter. It unveiled the Pixel 4 and Pixel 4 XL, new versions of its popular smartphone, which comes in two screen sizes. While the devices include new hardware features — like an extra camera lens and an infrared face scanner to unlock the phone — Google emphasized the phones’ use of so-called computational photography, which automatically processes images to look more professional.

Among the Pixel 4’s new features is a mode for shooting the night sky and capturing images of stars. And by adding the extra lens, Google augmented a software feature called Super Res Zoom, which allows users to zoom in more closely on images without losing much detail.

Apple also highlighted computational photography last month when it introduced three new iPhones. One yet-to-be-released feature, Deep Fusion, will process images with an extreme amount of detail.

The big picture? When you take a digital photo, you’re no longer really capturing a single shot.

“Most photos you take these days are not a photo where you click the photo and get one shot,” said Ren Ng, a computer science professor at the University of California, Berkeley. “These days it takes a burst of images and computes all of that data into a final photograph.”

Computational photography has been around for years. One of the earliest forms was HDR, for high dynamic range, which involved taking a burst of photos at different exposures and blending the best parts of them into one optimal image.
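To make the blending idea concrete, here is a minimal Python sketch of exposure fusion under simplified assumptions; the function name, weighting scheme and sample values are illustrative and not any phone maker’s actual pipeline.

```python
# A minimal sketch of HDR-style exposure blending, assuming three hypothetical
# grayscale frames of the same scene shot at different exposures (values in 0-1).
import numpy as np

def blend_exposures(frames):
    """Blend a burst of differently exposed frames into one image.

    Each pixel is weighted by how "well exposed" it is: values near the middle
    of the 0-1 range count more than blown-out or crushed ones.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])   # shape (n, h, w)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))   # favor mid-tones
    weights /= weights.sum(axis=0) + 1e-8                      # normalize per pixel
    return (weights * stack).sum(axis=0)                       # weighted average

# Example: a dark, a normal and a bright exposure of the same tiny scene.
dark   = np.array([[0.05, 0.10], [0.08, 0.02]])
normal = np.array([[0.40, 0.55], [0.50, 0.30]])
bright = np.array([[0.90, 0.98], [0.95, 0.80]])
print(blend_exposures([dark, normal, bright]))
```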

Over the last few years, more sophisticated computational photography has rapidly improved the photos taken on our phones.


Google gave me a preview of its Pixel phones last week. Here’s what they tell us about the software that’s making our phone cameras tick, and what to look forward to. (For the most part, the photos will speak for themselves.)

Last year, Google introduced Night Sight, which made photos taken in low light look as though they had been shot in normal conditions, without a flash. The technique took a burst of photos with short exposures and reassembled them into an image.

With the Pixel 4, Google is applying a similar technique to photos of the night sky. For astronomy photos, the camera detects when it is very dark and takes a burst of images at extra-long exposures to capture more light. The result, Google said, is something that previously could be done only with full-size cameras and bulky lenses.
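The underlying idea, merging many short exposures into one brighter, cleaner image, can be sketched in a few lines of Python. This toy version assumes the frames are already aligned; real pipelines also have to correct for hand and star motion between shots.

```python
# A minimal sketch of burst merging for low light, assuming the frames in the
# burst are already aligned with one another.
import numpy as np

def merge_burst(frames):
    """Average a burst of short, noisy exposures into one cleaner frame.

    Averaging n frames cuts random sensor noise by roughly sqrt(n), while the
    true scene content, which is the same in every frame, is preserved.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Example: simulate 16 noisy captures of the same dim, flat patch.
rng = np.random.default_rng(0)
scene = np.full((4, 4), 0.1)
burst = [scene + rng.normal(0, 0.05, scene.shape) for _ in range(16)]
print(merge_burst(burst).round(3))   # close to 0.1, with far less noise
```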

Apple’s new iPhones also introduced a mode for shooting photos in low light, employing a similar method. Once the camera detects that a setting is very dark, it automatically captures multiple pictures and fuses them together while adjusting colors and contrast.

A few years ago, phone makers like Apple, Samsung and Huawei introduced cameras with portrait mode, also known as the bokeh effect, which keeps a subject in the foreground sharp while blurring the background. Most phone makers used two lenses working together to create the effect.

Two years ago with the Pixel 2, Google accomplished the same effect with a single lens. Its method largely relied on machine learning — computers analyzing millions of images to recognize what’s important in a photo. The Pixel then made predictions about the parts of the photo that should stay sharp and created a mask around it. A special sensor inside the camera, called dual-pixel autofocus, helped analyze the distance between the objects and the camera to make the blurring look realistic.
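Roughly speaking, once a subject mask exists, producing the bokeh effect comes down to blending a sharp copy of the image with a blurred one. The sketch below assumes a hypothetical mask is already available; Google’s actual model and depth processing are far more involved.

```python
# A minimal sketch of the portrait-mode idea, assuming we already have a
# hypothetical subject mask (1 = keep sharp, 0 = background), like the one a
# machine-learning model would predict.
import numpy as np
from scipy.ndimage import gaussian_filter

def portrait_blur(image, subject_mask, blur_sigma=3.0):
    """Keep masked pixels sharp and blur everything else."""
    blurred = gaussian_filter(image.astype(np.float64), sigma=blur_sigma)
    return subject_mask * image + (1 - subject_mask) * blurred

# Example: pretend the model found the subject in the middle of a noisy scene.
rng = np.random.default_rng(1)
image = rng.random((64, 64))
mask = np.zeros((64, 64))
mask[20:44, 20:44] = 1.0
result = portrait_blur(image, mask)
```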

With the Pixel 4, Google said, it has improved the camera’s portrait-mode ability. The new second lens will allow the camera to capture more information about depth, which lets the camera shoot objects with portrait mode from greater distances.


In the past, zooming in with digital cameras was practically taboo because the image would inevitably become very pixelated, and the slightest hand movement would create blur. Google used software to address the issue last year in the Pixel 3 with what it calls Super Res Zoom.

The technique takes advantage of natural hand tremors to capture a burst of photos in varying positions. By combining each of the slightly varying photos, the camera software composes a photo that fills in detail that wouldn’t have been there with a normal digital zoom.
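The shift-and-add idea behind this kind of multi-frame zoom can be illustrated with a toy example in which each frame’s sub-pixel offset is known exactly; in practice the offsets must be estimated from the hand tremor itself, and the merging is far more sophisticated.

```python
# A minimal sketch of shift-and-add multi-frame super resolution, assuming the
# sub-pixel offset of every frame is known.
import numpy as np

def shift_and_add(frames, offsets, scale=2):
    """Place samples from offset low-res frames onto a finer grid and average."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    count = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, offsets):
        acc[dy::scale, dx::scale] += frame     # each frame fills a different sub-grid
        count[dy::scale, dx::scale] += 1
    return acc / np.maximum(count, 1)

# Example: four captures of the same scene, each shifted by half a pixel.
high_res = np.arange(64, dtype=np.float64).reshape(8, 8)
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = [high_res[dy::2, dx::2] for dy, dx in offsets]       # simulated captures
print(np.allclose(shift_and_add(frames, offsets), high_res))  # True
```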

The Pixel 4’s new lens extends Super Res Zoom by zooming in optically, much like a zoom lens on a film camera. In other words, the camera now combines the software technique with the optical lens to zoom in extra close without losing detail.

Computational photography is an entire field of study in computer science. Dr. Ng, the Berkeley professor, teaches courses on the subject. He said he and his students were researching new techniques like the ability to apply portrait-mode effects to videos.

Say, for example, two people in a video are having a conversation, and you want the camera to automatically focus on whoever is speaking. A video camera can’t typically know how to do that because it can’t predict the future. But in computational photography, a camera could record all the footage, use artificial intelligence to determine which person is speaking and apply the auto-focusing effects after the fact. The video you’d see would shift focus between two people as they took turns speaking.
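If such a speaker detector existed, applying the focus after the fact might look roughly like the sketch below; the detector, masks and function names here are hypothetical and not part of any shipping system or of Dr. Ng’s research code.

```python
# A minimal sketch of after-the-fact refocusing, assuming a hypothetical
# upstream model has already produced a mask for each person in the scene and
# a per-frame label of who is speaking.
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus_video(frames, person_masks, speaker_per_frame, sigma=3.0):
    """Re-render recorded frames so whoever is currently speaking stays sharp."""
    output = []
    for frame, speaker in zip(frames, speaker_per_frame):
        mask = person_masks[speaker]                            # 1 = keep sharp
        blurred = gaussian_filter(frame.astype(np.float64), sigma)
        output.append(mask * frame + (1 - mask) * blurred)
    return output
```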

“These are examples of capabilities that are completely new and emerging in research that could completely change what we think of as possible,” Dr. Ng said.


