How the best smartphone cameras are now propelled by custom silicon, not the best optics

warpcore
5 min read · Oct 24, 2019

2019 is proof that the best camera phones don’t necessarily have the best camera optics, but they do come with custom silicon that enables best-in-class image processing, thanks to the increasing importance of computational photography. The three best camera phones of 2019 (the iPhone 11 Pro, the Huawei Mate 30 Pro and the new Google Pixel 4) all come with custom processing hardware that is exclusive to them. Notably, these are also the phones that don’t use the industry-standard Sony IMX 586 48-megapixel sensor, which according to many product experts is the best optical hardware available for a smartphone. In fact, the iPhone 11 and the Google Pixel 4 use rather humble camera hardware, leveraging pedestrian-sounding 12-megapixel sensors, and by modern standards they even use fewer cameras. Huawei, on the other hand, uses a completely custom optical solution designed in tandem with Leica and tuned on its own Kirin chipset, which makes its approach unique. So it is clear that the secret sauce is in the custom silicon these phones are touting.

What are these custom processors doing?

  1. Apple A13 Bionic: The iPhone 11 Pro has Apple’s proprietary A13 Bionic chipset, which is used only by the iPhone and will never be made available on any other smartphone. It comes with a neural engine and two machine learning accelerators, in addition to having the fastest GPU and CPU on a smartphone. The A13 Bionic handles much of the custom processing: the portrait mode, the Deep Fusion technology that does multi-frame rendering, and the new night mode all lean on the custom silicon. Apple has trained machine learning algorithms that run on device, leveraging the neural engine, which is 15% faster this year, and the new machine learning accelerators. It’s this headroom that allows the iPhone 11 cameras to smartly maintain colour science across lenses, enhance the Smart HDR processing to give users a more balanced scene, and process 4K video at 60 frames per second while firing two cameras at the same time. Deep Fusion enables DSLR levels of detail on a gadget that’s compact even by modern smartphone standards. The iPhone’s portrait mode is also enhanced, especially on the standard iPhone 11, as it now supports all kinds of subjects, and the fact that Apple can produce a realistic depth map mimicking natural bokeh without a time-of-flight sensor on the back speaks volumes for the image processing pipelines and machine learning algorithms on the iPhone 11. Beyond its incredible image processing capabilities, Apple also arms the device with powerful editing features for both video and photos. The new video editing features in particular are enabled by the new chip, which nears desktop levels of performance, something unheard of in mobile chipsets.
  2. Google Pixel Neural Core: The Pixel 4 uses an off-the-shelf Qualcomm Snapdragon 855 processor, but it is augmented by Google’s Pixel Neural Core chip, which enables the Live HDR+ feature and much of the computational photography; it is basically the stove that cooks the Pixel’s secret sauce. On previous Pixel phones, users never got a live preview of either the HDR+ result or a portrait mode photo, but now live previews are enabled. Part of the problem was that Snapdragon processors were never able to do that level of heavy lifting, something users have been accustomed to on the iPhone for years, so the Neural Core facilitates it. Like the older Pixel Visual Core chip, it is also home to on-device machine learning models that optimise the Pixel 4 camera system for its subjects, including their colour. In fact, the learning-based white balancing system Google introduced with Night Sight now runs across all modes, partly thanks to the Pixel Neural Core, which hosts the new white balancing models. Similarly, Google’s gains in Night Sight and super-resolution zoom are also enabled by on-device machine learning on the Neural Core chip. Google also uses the telephoto camera for portrait mode on the Pixel 4 and now offers a more realistic blur, which can be credited to fresh training sets running on the Pixel Neural Core.
  3. Huawei Kirin 990: Huawei’s Mate 30 Pro comes with the state-of-the-art Kirin 990 processor, which integrates a custom NPU and CPU for the on-device processing that enables this device’s voracious camera capabilities. The chip is so powerful that it also powers the custom camera hardware on the phone. Huawei’s 40-megapixel RYYB camera sensor isn’t just unique in its pixel colour arrangement; it is also easily one of the largest sensors ever put on a smartphone. It is paired with a wide-angle, a telephoto and a time-of-flight (ToF) camera, making it one of the most sophisticated camera systems from a hardware perspective. The new dual ISP on the Kirin 990 performs extreme levels of noise reduction with block-matching and 3D filtering, which Huawei claims is on par with DSLR cameras. This ISP is what enables 7,680fps slow motion on the phone, which is simply unheard of: Android smartphones are scrambling to run 960fps, while the iPhone maxes out at 240fps. The chip also enables an ultra-low-light mode even for video, which hasn’t been seen before on a smartphone. The neural core on the Kirin 990 also comes with trained AI-RAW algorithms that use artificial intelligence to optimise demosaicing from quad-Bayer data to RGB images. Its new AI processing also enables better stabilisation at medium levels of zoom, which on the Mate 30 Pro maxes out at 3x. Huawei uses its latest training set for the night mode, which gives the Mate 30 Pro the brightest night mode of the three phones, though it isn’t the most representative of reality.
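The common thread across all three chips is multi-frame processing: Deep Fusion, HDR+ and the various night modes all merge a burst of exposures into one cleaner image. A minimal NumPy sketch of that core principle (hypothetical illustration, not any vendor’s actual pipeline, which would add alignment, ghost rejection and tone mapping):

```python
import numpy as np

def merge_burst(frames):
    """Average an already-aligned burst of frames.

    Sensor noise is uncorrelated between shots, so averaging N frames
    cuts the noise's standard deviation by roughly sqrt(N) while the
    scene detail, which is identical in every frame, is preserved.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a handheld burst: one static scene plus per-frame noise.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 255.0, size=(64, 64))        # the "true" scene
burst = [scene + rng.normal(0.0, 25.0, scene.shape)   # 9 noisy exposures
         for _ in range(9)]

merged = merge_burst(burst)
single_err = np.abs(burst[0] - scene).mean()  # noise in one frame
merged_err = np.abs(merged - scene).mean()    # noise after merging
```

For a 9-frame burst the merged error lands near a third of the single-frame error, which is why these phones can pull usable images out of tiny sensors in near darkness.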

Beyond these three, there is even the penta-camera Nokia 9 PureView, which uses a custom chip from computational imaging company Light to power its five cameras, delivering incredible resolution and unmatched levels of depth.

We are fully in the age of computational photography, and in this age packing the best camera hardware means having the best camera processing silicon, backed by software that can resolve the best images and video. Having the best optics isn’t good enough, and having the most cameras doesn’t necessarily mean you’re going to end up with the best smartphone camera. It is all in the processing, and that starts with a custom chip that runs the computational algorithms and magically turns the limited light captured by small smartphone camera sensors into something a DSLR would capture, sometimes even surpassing it. It is a case of physics being usurped by bits.


Written by warpcore

Serving communities on the intersection of technology, indie music and culture, the warp core is a think tank founded by technology journalist Sahil Mohan Gupta
