
Computational Photography explained: How software processing overtakes hardware

Artificial intelligence and machine learning are changing the way we take photos. See how software processing has improved over time to challenge the benefits offered by better hardware.

In the modern world, a smartphone is expected to replace most of the electronic gadgets we own, even if the replacement isn't quite as good as the original. Smartphones have already replaced calculators, audio recorders & cameras. Here we will be talking about smartphone camera technologies.

With all this processing power, smartphones can now replace DSLR cameras up to a certain extent. It is inevitable. No one is fond of carrying a heavy camera when they have an almost similar quality camera in their pocket. Most users don't care about the RAW sensor data at all – all they want is a social media worthy photo, crisp & clear.

But there's more to it. Smartphones now come with multiple cameras. First there were dual cameras, then triple & quad cameras (the Samsung Galaxy A9 was the first quad camera phone), & now we have the penta camera Nokia 9 PureView.

The software has also improved a lot over the last few years. AI based processing detects the scene and automatically optimizes the color, contrast & sharpness of the image. Huawei introduced Hi-Resolution Lossless Digital Zoom in the Mate 20 Pro – a pocket camera beast. Xiaomi introduced a Moon mode in its flagships.

All these phones had more than one camera. The real breakthrough was Google's Pixel series. Pixel smartphones have just a single camera, yet they outperform most multi sensor camera phones. How does this happen?

The answer is simple: Google's expertise in Artificial Intelligence & Machine Learning helped it design a camera app – Google Camera, or simply GCam.

[Image: the Google Camera app]

GCam essentially captures the image in a flat color profile & color grades it using Google's image processing engine. The advantage of shooting flat is that it avoids blown-out highlights & crushed shadow detail. It is relatively easy to recover detail from shadows, but highlight recovery is difficult.
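To get a feel for what that grading step does, here is a minimal Python sketch – illustrative only, not Google's actual engine – that takes a flat, low-contrast capture and applies a gamma lift plus a simple contrast stretch. The gamma and contrast values are arbitrary assumptions:

```python
import numpy as np

def apply_tone_curve(flat_img: np.ndarray, gamma: float = 0.8,
                     contrast: float = 1.3) -> np.ndarray:
    """Grade a 'flat' 8-bit capture: lift midtones with a gamma curve,
    then stretch contrast around middle grey."""
    x = flat_img.astype(np.float32) / 255.0   # normalise to [0, 1]
    x = np.power(x, gamma)                    # gamma < 1 brightens midtones
    x = 0.5 + (x - 0.5) * contrast            # expand values away from grey
    return (np.clip(x, 0.0, 1.0) * 255).astype(np.uint8)
```

Because the flat capture keeps highlights below clipping, the curve can push contrast back in without losing the detail that a punchy in-camera profile would already have thrown away.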

For low light photos, GCam takes multiple photos, stacks them together and applies noise reduction. The result is a photo that is brighter &, though not very crisp, usable to a certain extent.
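A toy version of that stacking step is shown below, assuming the burst frames are already aligned (real pipelines like HDR+ align image tiles first, which is the hard part). Averaging N frames cuts random sensor noise by roughly √N:

```python
import numpy as np

def stack_burst(frames: list) -> np.ndarray:
    """Average an aligned burst of noisy 8-bit frames.
    More frames -> lower noise, at the cost of ghosting if anything moved."""
    acc = np.zeros(frames[0].shape, dtype=np.float32)
    for f in frames:
        acc += f.astype(np.float32)   # accumulate in float to avoid overflow
    mean = acc / len(frames)
    return np.clip(mean, 0, 255).astype(np.uint8)
```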


Other manufacturers are replicating these techniques along with some proprietary algorithms of their own for better image quality. In addition, many third party developers are porting (rewriting) GCam for other devices as well.

The first DSLR like feature is the shallow depth of field effect – simply known as bokeh. There are different ways to achieve it.

Method 1 is to use software algorithms to detect the objects in focus (or the objects that should be in focus) & blur out the remaining part of the image. This is what Google does in its Pixel phones, since they have only one camera sensor. Method 2 is to use a secondary sensor to capture depth information and distinguish subject from background; the data from this sensor determines which parts stay in focus. Technically, the second method should give better results.

Turns out it's not! Google's method actually does a better job at edge detection than the dedicated depth camera on most devices. This clearly shows the strength of AI algorithms against hardware assisted methods.
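Here is a stripped-down sketch of the single-camera approach. The subject mask is assumed to come from some segmentation model (hypothetical here – Google's model is proprietary); the rest is just blurring the frame and blending the sharp subject back on top:

```python
import cv2
import numpy as np

def fake_bokeh(img: np.ndarray, mask: np.ndarray, ksize: int = 31) -> np.ndarray:
    """Blend a sharp subject over a blurred background.
    `img`  : HxWx3 BGR frame.
    `mask` : HxW uint8 subject mask (255 = in focus), e.g. from a
             person-segmentation model (assumed, not provided here)."""
    blurred = cv2.GaussianBlur(img, (ksize, ksize), 0)   # ksize must be odd
    alpha = (mask.astype(np.float32) / 255.0)[..., None] # HxWx1 blend weights
    out = img.astype(np.float32) * alpha + blurred.astype(np.float32) * (1 - alpha)
    return out.astype(np.uint8)
```

The quality of the effect lives or dies on that mask – which is exactly why good edge detection matters more than having a depth sensor at all.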

HDR photos are also the new trend. An HDR photo looks closer to what we see with our own eyes – the dynamic range is far better. HDR too is made possible by stacking: multiple photos are taken at different exposures & merged, so that both highlights and shadows keep their detail in that awesome looking landscape shot.
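As a rough illustration of the merging step, OpenCV ships a Mertens exposure fusion implementation that blends bracketed shots without needing the camera's response curve. The file names below are placeholders for an under-, normally- and over-exposed shot of the same scene:

```python
import cv2
import numpy as np

# Hypothetical bracketed shots of the same scene.
frames = [cv2.imread(p) for p in ("under.jpg", "normal.jpg", "over.jpg")]

# Mertens fusion weights each pixel by contrast, saturation & exposedness,
# keeping the best-exposed parts of every frame.
fused = cv2.createMergeMertens().process(frames)   # float32, roughly [0, 1]
result = np.clip(fused * 255, 0, 255).astype(np.uint8)
cv2.imwrite("fused.jpg", result)
```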

[Image: HDR image stacking]

In terms of shooting video, realtime software processing can only do so much. Video processing is mostly limited to EIS (Electronic Image Stabilisation), which reduces jitter & motion blur while shooting. EIS uses the chipset's processing power to detect shaky movement & then crops the frames at the edges to cancel out the motion. Although EIS is inferior to OIS, it comes in handy.
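A toy EIS sketch along those lines: estimate the global shift between consecutive frames with phase correlation, shift the frame back, and crop a fixed margin to hide the moving edges. Real EIS uses gyroscope data and per-scanline correction, so treat this as a simplification:

```python
import cv2
import numpy as np

def stabilise_frame(prev_gray, curr_gray, curr_bgr, margin=40):
    """Cancel the global translation between two frames, then crop.
    `prev_gray`/`curr_gray` are HxW greyscale frames; `curr_bgr` is the
    colour frame to stabilise; `margin` is the crop that hides the edges."""
    (dx, dy), _ = cv2.phaseCorrelate(prev_gray.astype(np.float32),
                                     curr_gray.astype(np.float32))
    h, w = curr_gray.shape
    M = np.float32([[1, 0, -dx], [0, 1, -dy]])       # undo the detected shake
    shifted = cv2.warpAffine(curr_bgr, M, (w, h))
    return shifted[margin:h - margin, margin:w - margin]
```

The crop is why EIS footage always has a slightly narrower field of view than the sensor actually captures.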

Samsung has recently introduced live focus tracking in videos – keeping the subject in focus while it is in motion, a feature which could previously only be done on professional DSLRs. This does require a lot of processing though.

Read more on our Google Pixel 3A Review – The budget Pixel trolled by its own price tag

With the Google Pixel 4 & Samsung Galaxy Note 10 around the corner, it will be interesting to see what camera technologies they incorporate. The Pixel 4 is said to finally get a second camera sensor (an ultrawide), while the Galaxy Note 10 is touted to house a triple camera setup with a ToF (time of flight) sensor for real time background blur effects.

Computational photography is only going to get better in the coming days, and we surely look forward to more AI based enhancements to camera apps. What are your thoughts on this? Do share in the comments below.

Written by TechBuzzIn Mediaworks

