
Can computational photography make photos better?

Deep Fusion now applies to all of the cameras, using advanced machine learning to process photos pixel by pixel and optimize the detail, texture and color in each part of the image.

With the release of the iPhone 12 series, computational photography seems to have become a buzzword.

What is computational photography?

Computational photography refers to digital image capture and processing techniques that rely on digital computation rather than optical processes.
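To make "digital computation instead of optical processes" concrete, here is a minimal sketch (plain Python/NumPy, not any vendor's actual pipeline) of the most basic computational trick: capturing several noisy frames of the same scene and averaging them. Stacking N aligned frames cuts random sensor noise by roughly the square root of N, which is the core idea behind features like night modes.

```python
import numpy as np

# Hypothetical example: "capture" several noisy frames of the same scene.
# In a real pipeline these would come from the sensor in quick succession.
rng = np.random.default_rng(0)
scene = np.full((4, 4), 100.0)  # the "true" brightness of a tiny scene
frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(9)]

# Averaging N aligned frames reduces random sensor noise by ~sqrt(N),
# brightening shadows without the blur of one long optical exposure.
stacked = np.mean(frames, axis=0)

print(f"noise of one frame:   {np.std(frames[0] - scene):.1f}")
print(f"noise after stacking: {np.std(stacked - scene):.1f}")  # ~3x lower with 9 frames
```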

Although the idea is nothing new, the technology has been refined over the past few years, making smartphones far more capable in the camera department. It means users can produce beautiful photos more easily, without knowing the technical side of photography.

iPhone 12 places photography and video at the forefront

The new LiDAR scanner on the iPhone 12 Pro and iPhone 12 Pro Max enables Night mode portraits and a more realistic AR experience.

In the iPhone 12 series, for example, Apple has made many image-processing improvements with its advanced A14 Bionic chip and a new image signal processor (ISP).

These improvements raise image quality and enable powerful computational photography features that traditional cameras cannot replicate.

However, traditional cameras do still have certain advantages over smartphones, particularly in image sensor size and lens selection.

Even so, with advances in artificial intelligence, Smart HDR, Deep Fusion and the Light Detection and Ranging (LiDAR) scanner, smartphones are increasingly able to compete with traditional cameras.

For example, the iPhone 12 Pro series is equipped with three cameras: a wide-angle camera with a ƒ/1.6 aperture; an ultra-wide camera with a 120-degree field of view; and a telephoto camera with a 52mm focal length, giving an optical zoom range of up to 5x.
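As a side note on where a spec like the 120-degree field of view comes from, the standard optics formula below relates a 35mm-equivalent focal length to its diagonal field of view. The focal lengths used here (roughly 13mm for the ultra-wide and 26mm for the wide camera) are assumptions for illustration; only the 52mm telephoto figure appears above.

```python
import math

def diagonal_fov_deg(focal_mm: float, frame_diagonal_mm: float = 43.3) -> float:
    """Diagonal field of view for a given 35mm-equivalent focal length.

    43.3mm is the diagonal of a full 36x24mm (35mm) frame.
    """
    return math.degrees(2 * math.atan(frame_diagonal_mm / (2 * focal_mm)))

# Assumed 35mm-equivalent focal lengths (ultra-wide ~13mm, wide ~26mm, tele 52mm)
for name, f in [("ultra-wide", 13), ("wide", 26), ("telephoto", 52)]:
    print(f"{name:>10} ({f}mm): {diagonal_fov_deg(f):.0f} degrees")
# The ~13mm ultra-wide works out to roughly 118-120 degrees diagonally,
# matching the field of view quoted above.
```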

[Example photo: "The iPhone 12 Brings an Unparalleled Camera Experience", Exibart Street]

These capabilities, combined with the computational photography techniques described above, give smartphones such as the iPhone 12 a clear advantage.

For example, improvements to Night mode (now extended to the TrueDepth and Ultra Wide cameras) make photos brighter.

When used with a tripod, Night mode Time-lapse allows longer exposure times, resulting in clearer video, better light trails and smoother exposure in low-light environments. All of the cameras now use a better, faster Deep Fusion, and with the new Smart HDR 3, users can expect more true-to-life images even in complex scenes.
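Smart HDR-style merging can be pictured as a per-pixel weighted blend of differently exposed frames. The sketch below is a textbook-style exposure-fusion toy in NumPy, not Apple's proprietary Smart HDR 3: well-exposed pixels (near mid-grey) get high weight, while blown highlights and crushed shadows are down-weighted.

```python
import numpy as np

def fuse_exposures(frames: list) -> np.ndarray:
    """Merge bracketed exposures with per-pixel weights.

    Generic exposure fusion (in the spirit of, not identical to, Smart HDR):
    pixels close to mid-grey are well exposed and get high weight, while
    blown highlights and crushed shadows are down-weighted.
    """
    stack = np.stack(frames).astype(np.float64)                # (N, H, W), values in [0, 1]
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))   # Gaussian "well-exposedness"
    weights /= weights.sum(axis=0, keepdims=True)              # normalize per pixel
    return (weights * stack).sum(axis=0)

# Toy bracket: the same gradient scene shot dark, normal and bright.
scene = np.linspace(0.0, 1.0, 8).reshape(1, 8)
bracket = [np.clip(scene * g, 0, 1) for g in (0.5, 1.0, 2.0)]  # under / normal / over
print(np.round(fuse_exposures(bracket), 2))
```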

What is LiDAR?

The LiDAR scanner in the Pro models measures how long light takes to reach objects and bounce back, giving the camera per-pixel depth information about the scene. This technology enables a faster, more realistic augmented reality (AR) experience. It also makes autofocus up to six times faster in low-light scenes, improving accuracy and reducing capture time for photos and videos. Combined with the power of the A14 Bionic's Neural Engine, this advanced hardware also unlocks Night mode portraits, delivering the beautiful low-light bokeh that was once the preserve of expensive large-aperture lenses.
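Apple does not publish the scanner's internals, but the underlying time-of-flight arithmetic is simple: light travels out, bounces back, and half the round-trip time multiplied by the speed of light gives the distance. A toy sketch with made-up per-pixel timings:

```python
# A LiDAR scanner times how long emitted light takes to bounce back;
# halving the round trip gives distance.
C = 299_792_458  # speed of light, m/s

def tof_distance_m(round_trip_ns: float) -> float:
    """Distance implied by a round-trip photon travel time in nanoseconds."""
    return C * (round_trip_ns * 1e-9) / 2

# Hypothetical per-pixel round-trip times for a 2x3 patch of the scene (ns)
patch_ns = [[10.0, 11.2, 12.5],
            [10.1, 11.0, 33.4]]
depth_map = [[round(tof_distance_m(t), 2) for t in row] for row in patch_ns]
print(depth_map)  # metres per pixel, e.g. a 10 ns round trip is ~1.5 m
```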

In addition, Apple will launch Apple ProRAW later this year, a format that combines Apple's multi-frame image processing and computational photography with the versatility of RAW.

This means users get full creative control over color, detail and dynamic range, whether editing on the iPhone or in other professional photo-editing applications.
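ProRAW captures are saved as DNG files, so this deferred-decision workflow can be illustrated with any LibRaw-based tool. The sketch below uses the third-party rawpy library and a hypothetical IMG_0001.DNG; it is a generic RAW-editing example, not Apple's own tooling. The point it shows: white balance, brightness and demosaicing are decided at edit time rather than baked in at capture.

```python
import rawpy             # third-party wrapper around LibRaw (pip install rawpy)
import imageio.v3 as iio

# Hypothetical file name; ProRAW captures are saved as DNG files,
# which LibRaw-based tools can decode like any other raw format.
with rawpy.imread("IMG_0001.DNG") as raw:
    # Because demosaicing, white balance and brightness happen here,
    # at edit time, color and exposure decisions stay fully adjustable.
    rgb = raw.postprocess(
        use_camera_wb=True,   # start from the camera's white balance
        no_auto_bright=True,  # keep exposure under manual control
        bright=1.2,           # ~+0.26 stop push, chosen at edit time
        output_bps=16,        # 16-bit output preserves dynamic range
    )

iio.imwrite("IMG_0001_edited.tiff", rgb)
```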


Content source: #TECH: Can computational photography make photos better?

 
