ECCV 2020 Daily - Wednesday

Rico Jonschkowski

“The literature on unsupervised optical flow is pretty diverse,” Rico tells us. “There are many complex methods with many parameters. Tweaking everything right is hard, and running experiments is costly. Every method has different components and uses a different combination. The problem is that we don’t know which of the basic components are best when you put them together. That’s the topic of the paper. We look at all of them, run experiments, and try to find the best method that combines the best simple components.”

As you can see in Rico’s presentation video, the team has come up with a simple model, having performed a thorough analysis of different components to gain insight into what does and does not work. Other works train flow jointly with depth estimation and ego-motion, or perform multi-frame estimation, and become very complex. This work does none of those things.

In the real world, flow lets you see motion, so any task where motion is useful should benefit from this additional input, such as activity recognition or object tracking. Enabling robots to perform dynamic tasks should be easier if they can directly see motion. Where flow cannot simply be used as an input, its pixel-to-pixel correspondences can be used to bootstrap and learn other representations: being able to track pixels over time is a very powerful thing.

Although Rico is playing his cards close to his chest when it comes to next steps, he is clear that this paper, and especially its open-source release, will open avenues for future work. Ultimately, it is a simple
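To make the pixel-to-pixel correspondence idea concrete, here is a minimal sketch of the photometric-consistency principle that unsupervised optical flow methods commonly build on: warp the second frame back toward the first using a candidate flow field, and measure how well the pixels match. This is an illustrative toy (nearest-neighbor warping, plain NumPy), not the paper's actual implementation; the function names and shapes are assumptions for the example.

```python
import numpy as np

def warp_nearest(img, flow):
    """Warp frame 2 back toward frame 1 with nearest-neighbor sampling.
    flow[y, x] = (dx, dy): pixel (x, y) in frame 1 is assumed to have
    moved to (x + dx, y + dy) in frame 2."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Sample frame 2 at the displaced coordinates, clipped to the image.
    xs2 = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    ys2 = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img[ys2, xs2]

def photometric_loss(frame1, frame2, flow):
    """Mean absolute difference between frame 1 and the warped frame 2.
    A good flow field makes this small; it needs no ground-truth labels."""
    return np.abs(frame1 - warp_nearest(frame2, flow)).mean()
```

For example, if frame 2 is frame 1 shifted one pixel to the right, a constant flow of (1, 0) should score a lower photometric loss than a zero flow; minimizing this kind of objective (plus regularizers such as smoothness) is what lets such methods learn without labels.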
