Rahul Garg

I am a staff research scientist at Google, where I work on computer vision, machine learning, and computational photography to build features and products that delight users. I have worked on Magic Eraser and Portrait Blur for Google Photos, Portrait Mode for Pixel phones, Portrait Restore for Google Meet, Hand Tracking for AR/VR, and Face Movies for Google Picasa.

Before Google, I was one of the first employees at Flutter, a startup acquired by Google. I completed my PhD in Computer Science and Engineering at the University of Washington. In a past life, I spent four wonderful years at IIT Delhi, where I received my Bachelor's degree and graduated with the President's Gold Medal.

Email  /  Google Scholar  /  LinkedIn

Selected projects:

Portrait Mode on Google Pixel 2
Example photos / press / SIGGRAPH'18 Paper

Synthesize shallow depth-of-field images on the Google Pixel 2 phone using a single lens. Top-rated camera phone of 2017 by DxO.

Learned Depth on Google Pixel 3
Example photos / press / ICCV'19 (paper | code) / ICCP'19 (paper | code)

A neural network trained on images from a custom five-camera rig predicts higher-quality depth than stereo algorithms. Powers Portrait Mode on the Google Pixel 3 and 3a.

Depth from Dual-Cameras and Dual-Pixels for Google Pixel 4
Example photos / press / ECCV'20 paper

Fuse complementary cues from dual-pixels and dual-cameras for high-quality depth on the Google Pixel 4.

Flutter: Hand Gesture Detection
Marketing video / press

Low-power, real-time hand gesture detection for media control. The Mac app was rated 4.5+ stars and was among Apple's best apps of the year.

Face Movies in Google Picasa
Video / SIGGRAPH paper / CACM Cover Story

Automatically create face- and expression-aligned movies from photo collections. Unfortunately, Picasa has been discontinued, but user-created movies still survive.


Legacy links: old homepage, class projects, flickr photos, programming contest problems

Template from Jon Barron.