Google details how the ‘Top Shot’ camera feature on the Pixel 3 works


With the Pixel 3 series, Google introduced a new camera feature dubbed Top Shot that helps you capture the right moment. Top Shot saves and analyzes image frames before and after the shutter press in real time on the device, using computer vision techniques, and recommends several alternative high-quality HDR+ photos.

Google says that each image is analyzed for qualitative features in real time, entirely on-device, to preserve privacy and minimize latency. Each frame is also associated with additional signals, such as optical flow, exposure time, and gyroscope data, which together form the input features used to score that frame’s quality.
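As a rough illustration, those per-frame signals might be assembled into a feature vector and scored like this (the field names and the linear weighting are assumptions for the sketch; Google’s actual quality model is learned and runs on-device):

```python
from dataclasses import dataclass

@dataclass
class FrameFeatures:
    # Hypothetical per-frame signals named in the article; exact fields are assumptions.
    face_quality: float       # on-device face analysis score, 0..1
    optical_flow_mag: float   # mean optical-flow magnitude between frames
    exposure_time_ms: float   # exposure time for the frame
    gyro_rate: float          # gyroscope rotation rate during capture

def score_frame(f: FrameFeatures) -> float:
    """Toy linear combination standing in for the learned quality model."""
    # Long exposure combined with camera/subject motion is a proxy for motion blur.
    blur_proxy = f.exposure_time_ms * (f.gyro_rate + f.optical_flow_mag)
    return f.face_quality - min(blur_proxy / 100.0, 1.0)

sharp = FrameFeatures(face_quality=0.9, optical_flow_mag=0.05,
                      exposure_time_ms=8.0, gyro_rate=0.02)
shaky = FrameFeatures(face_quality=0.9, optical_flow_mag=0.80,
                      exposure_time_ms=33.0, gyro_rate=0.50)
print(score_frame(sharp) > score_frame(shaky))  # the sharper frame scores higher
```

The point of the sketch is only that several independent signals collapse into a single per-frame quality score that can be ranked later.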

Once the shutter button is pressed, Top Shot captures up to 90 images from 1.5 seconds before and after the shutter press and selects up to two alternative shots to save in high resolution. The shutter frame is processed and saved first, and the best alternative shots afterward. The Pixel Visual Core on the Pixel 3 processes these top alternatives as HDR+ images with very little extra latency, and they are embedded into the Motion Photo file.
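The capture-and-select step can be sketched as a fixed-size frame buffer plus a rank-and-pick pass. This is a minimal sketch under assumptions: the class and method names are hypothetical, the quality scores are supplied externally, and the real pipeline renders the winners as HDR+ via the Pixel Visual Core:

```python
from collections import deque

MAX_FRAMES = 90  # up to 90 frames spanning ~1.5 s before and after the shutter press

class TopShotBuffer:
    def __init__(self):
        # Ring buffer: the oldest frames fall off as new ones arrive.
        self.frames = deque(maxlen=MAX_FRAMES)

    def add(self, frame_id, quality):
        self.frames.append((frame_id, quality))

    def pick_alternatives(self, shutter_id, k=2):
        """Return up to k alternative frame ids (best quality first),
        excluding the shutter frame, which is always saved anyway."""
        candidates = [f for f in self.frames if f[0] != shutter_id]
        candidates.sort(key=lambda f: f[1], reverse=True)
        return [f[0] for f in candidates[:k]]

buf = TopShotBuffer()
for i, q in enumerate([0.3, 0.9, 0.5, 0.95, 0.4]):
    buf.add(i, q)
print(buf.pick_alternatives(shutter_id=4))  # → [3, 1]
```

A bounded `deque` mirrors the described behavior well: continuous low-resolution capture with a hard cap on how much is buffered, so memory stays constant no matter how long the camera runs.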

Google Top Shot

Since Top Shot runs as a background process, it has very low power consumption. It uses a hardware-accelerated MobileNet-based single-shot detector (SSD). While Top Shot prioritizes face analysis, there are good moments in which faces are not the primary subject. To handle those cases, Google also includes a subject motion saliency score, a global motion blur score, and “3A” (auto-exposure, autofocus, auto-white-balance) scores.
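One way the non-face signals could act as a fallback is sketched below. The combination rule is an assumption for illustration only; the argument names mirror the scores listed above, not Google’s actual model:

```python
def combined_score(face_score, subject_saliency, global_blur, three_a):
    """Toy combination of the score types the article lists.
    face_score:       quality of detected faces (0.0 if no face is found)
    subject_saliency: subject motion saliency, 0..1
    global_blur:      global motion blur, 0..1 (higher = blurrier)
    three_a:          "3A" auto-exposure/autofocus/auto-white-balance score, 0..1
    """
    sharpness = 1.0 - global_blur
    # Fall back to subject saliency when faces are not the primary subject.
    subject = max(face_score, subject_saliency)
    return subject * sharpness * three_a

# A faceless but salient, sharp, well-exposed frame still scores well:
print(round(combined_score(0.0, 0.8, 0.1, 0.9), 3))  # → 0.648
```

The `max` between the face and saliency scores captures the stated design intent: faces win when present, but a frame without faces is not automatically penalized.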
