Computer Vision News - October 2018
Nan Yang is a PhD student and computer vision engineer at Artisense, which is part of the ECCV 2018 Expo. He spoke to us ahead of his oral presentation and poster session, Deep Virtual Stereo Odometry: Leveraging Deep Depth Prediction for Monocular Direct Sparse Odometry, about integrating deep learning into the classical computer vision problem of SLAM, or visual odometry.

Deep learning has swept through many areas of computer vision, but SLAM and visual odometry remain one field where deep learning methods cannot yet compete with classical methods. Nan’s goal is to use deep learning to improve a classical visual odometry system. He says that the current research direction in deep learning is that many people try to solve this problem end to end, feeding the network images and hoping it produces a good result without knowing much about the process. His approach instead takes the advantages of deep learning while keeping the advantages of the classical SLAM part, and combines the two.

He explains further: “My approach is to use monocular depth estimation to estimate the depth from one single image using deep learning, and we integrate this into a visual odometry system. There’s a problem using a visual odometry system with a single camera, because the performance is not so good. Using only one single camera, the depth estimation is not very reliable. Our technique is to use deep learning to get the estimate of the depth, which can give us the metric scale of the world.” A rough sketch of this idea appears at the end of this article.

What are the real-world applications? Till Kaestner, co-founder of Artisense, tells us: “It enables systems for autonomous vehicles to navigate without GPS with very high accuracy and robustness.”

Nan says that for now, the system focuses more on the tracking part: estimating the poses of the car. The next steps will be to integrate this process with a global mapping technique. For example, with global mapping in the cloud, they can integrate this local mapping, or local estimation of the poses, into the cloud-based global map. Then every autonomous driving car can know exactly where it is in the real world, without drift.
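To make the idea concrete, here is a minimal Python sketch of the general pattern Nan describes: a single-image depth network seeds the depths of points selected by a direct visual odometry front end, supplying the metric scale that a single camera alone cannot observe. The network stub, camera intrinsics, and helper names below are illustrative assumptions, not Artisense's implementation.

```python
# Minimal sketch (assumptions, not Artisense's code): using a monocular depth
# network's prediction to initialize point depths for a direct visual odometry
# front end, so the reconstruction carries metric scale.

import numpy as np

def predict_depth(image: np.ndarray) -> np.ndarray:
    """Hypothetical single-image depth network; returns metric depth per pixel."""
    # In practice this would be a trained CNN; a constant map stands in here.
    return np.full(image.shape[:2], 5.0)  # placeholder: 5 m everywhere

def initialize_points(image: np.ndarray, keypoints: np.ndarray,
                      fx: float = 525.0, fy: float = 525.0,
                      cx: float = 319.5, cy: float = 239.5) -> np.ndarray:
    """Seed each selected pixel with the network's depth instead of an
    arbitrary, scale-free value, then back-project to 3D with a pinhole model."""
    depth_map = predict_depth(image)
    u, v = keypoints[:, 0], keypoints[:, 1]
    depths = depth_map[v, u]                 # depth at each sampled pixel
    x = (u - cx) / fx * depths
    y = (v - cy) / fy * depths
    return np.stack([x, y, depths], axis=1)  # metrically scaled 3D points

if __name__ == "__main__":
    frame = np.zeros((480, 640), dtype=np.float32)         # dummy grayscale frame
    pts = np.array([[100, 200], [320, 240], [500, 100]])   # sampled (u, v) pixels
    print(initialize_points(frame, pts))
```

In a full system, these depth-initialized points would then be refined jointly with the camera poses by the direct odometry back end; the network prediction mainly removes the scale ambiguity and stabilizes initialization.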