Computer Vision News - August 2019
Research

Every month, Computer Vision News reviews a research paper from our field. This month we have chosen Learning the Depths of Moving People by Watching Frozen People. We are indebted to the authors from Google Research (Zhengqi Li, Tali Dekel, Forrester Cole, Richard Tucker, Noah Snavely, Ce Liu, William T. Freeman) for allowing us to use their images. The paper won an Honorable Mention Award at CVPR 2019.

Structure from Motion addresses the problem of reconstructing 3D structure from the motion of a camera. Most of the existing theory refers only to the case where the scene is static, i.e. there is no object movement between frames. Current state-of-the-art pipelines treat moving objects as noise and try to reconstruct only the static part of the scene. This approach raises problems for applications that need a dense depth map of a scene with moving objects, for example computing the distance to moving people from an autonomous car.

The paper we review (by Google Research) tackles this problem with an original idea. The authors train a network to predict a dense depth map using Mannequin Challenge videos. These videos were a popular trend in 2016, in which people filmed themselves standing frozen while the camera moved among them. By leveraging this special structure, the authors were able to generate dense depth maps for training, enabling the network to perform dense depth estimation of moving people at inference time.
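To make the inference step concrete, here is a minimal toy sketch of single-image dense depth prediction: a small encoder-decoder takes an RGB frame and outputs a one-channel depth map of the same resolution. The architecture and names (e.g. TinyDepthNet) are placeholders of ours for illustration only, not the authors' network.

# Toy illustration of single-image dense depth prediction (not the authors' model).
# A small encoder-decoder maps an RGB frame to a one-channel depth map.
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):  # hypothetical placeholder architecture
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, rgb):
        # rgb: (N, 3, H, W) -> depth: (N, 1, H, W)
        return self.decoder(self.encoder(rgb))

frame = torch.rand(1, 3, 192, 256)   # a dummy RGB video frame
depth = TinyDepthNet()(frame)        # dense per-pixel depth prediction
print(depth.shape)                   # torch.Size([1, 1, 192, 256])

The point of the paper is how to obtain supervision for such a network: because the people in Mannequin Challenge videos are static while the camera moves, standard multi-view reconstruction can produce the dense depth maps used as training targets.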