Modern smartphone photography is as much a product of what the image sensor captures as of what the image processor does with that information afterwards. Xiaomi has presented a paper that aims to solve a common issue of small pixels, limited dynamic range, by employing AI to fix a photo's exposure.
But it doesn't adjust the whole photo at once. Instead, the image is segmented into sub-images whose exposures are adjusted separately, and the parts are then merged to form the final image.
Take this image, for example. The buildings are white, the clouds are white and the overexposed sky is white, yet the AI, dubbed DeepExposure, does a great job of restoring detail.
A schematic illustration of the algorithm: first, it uses image segmentation to obtain sub-images; each sub-image then gets a different exposure according to the policy network, and the results are fused together to form the final high-quality image
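The segment-adjust-fuse structure can be sketched in a few lines. Note the heavy caveats: the paper uses a learned segmentation and a reinforcement-learning policy network to pick exposures, whereas this sketch substitutes a trivial luminance-based segmentation and a mid-gray heuristic purely to illustrate the pipeline's shape.

```python
import numpy as np

def segment_by_luminance(img, bins=3):
    """Split an image into brightness-band masks.
    Stand-in for the paper's learned segmentation."""
    luma = img.mean(axis=-1)
    edges = np.quantile(luma, np.linspace(0, 1, bins + 1))
    return [(luma >= lo) & (luma <= hi) for lo, hi in zip(edges[:-1], edges[1:])]

def choose_exposure(img, mask):
    """Pick a gain pushing the segment's mean toward mid-gray.
    Stand-in for the policy network's learned exposure choice."""
    mean = img[mask].mean()
    return np.clip(0.5 / max(mean, 1e-6), 0.25, 4.0)

def deep_exposure_sketch(img):
    """Segment the image, expose each segment separately, fuse the results."""
    out = np.zeros_like(img)
    for mask in segment_by_luminance(img):
        gain = choose_exposure(img, mask)
        out[mask] = np.clip(img[mask] * gain, 0.0, 1.0)
    return out
```

The fusion step here is a hard per-pixel merge; the real pipeline blends sub-images more gracefully, but the division of labor is the same: dark segments get boosted, blown-out segments get pulled back.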
To train the AI (based on a generative adversarial network), the Xiaomi team used images from the MIT-Adobe FiveK dataset, which contains unedited RAW photos along with the same photos retouched by five experts (the team used 3,000 images, picking the versions retouched by Expert C).
The network works on low-resolution images; its goal is to come up with the best parameters for classic image filters. Think of it as the AI fiddling with the dials in Lightroom. This simplifies the learning process, but should also speed up image processing.
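This is why predicting on a thumbnail works: the network's output is a handful of filter parameters rather than pixels, so parameters estimated cheaply at low resolution transfer directly to the full-size frame. A minimal sketch, where a trivial heuristic stands in for the network and a plain exposure gain stands in for the filter bank:

```python
import numpy as np

def downscale(img, factor=8):
    """Cheap box downscale by averaging factor x factor pixel blocks."""
    h, w, c = img.shape
    h2, w2 = h // factor, w // factor
    return img[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor, c).mean(axis=(1, 3))

def predict_exposure_gain(thumb, target=0.5):
    """Stand-in for the network: one scalar filter parameter from the thumbnail."""
    return float(np.clip(target / max(thumb.mean(), 1e-6), 0.25, 4.0))

def apply_filter(img, gain):
    """The 'classic image filter' being parameterized: a simple exposure push."""
    return np.clip(img * gain, 0.0, 1.0)

full = np.random.default_rng(1).random((512, 512, 3)) * 0.3  # dim, underexposed scene
gain = predict_exposure_gain(downscale(full))  # estimated on a 64x64 thumbnail
result = apply_filter(full, gain)              # applied at full resolution
```

The expensive step (the network, here `predict_exposure_gain`) only ever sees the thumbnail, while the cheap filter runs over all the pixels.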
Images retouched by the different algorithms. From left to right, top to bottom: original input image, DeepExposure I, DeepExposure II, Exposure, FI, Expert C, DPED, CycleGAN, Deep Photo Enhancer and Deep Guided Filter