CVPR Daily - Tuesday

Danna Gurari (University of Texas at Austin) presented the work conducted by a team of four: Pull the Plug? Predicting If Computers or Humans Should Segment Images. Algorithms work wonderfully when they work, but fail miserably when they do not. The starting point of the project is the need to allocate human intervention efficiently in an object segmentation task. When the choice is open between human, automatic and semi-automatic segmentation, Danna and her team have developed a predictive model that automatically decides, without a human review, when an algorithm does well and when a human expert is needed (a toy sketch of such a triage decision follows at the end of this piece). A full review of their work was published in the June issue of Computer Vision News and can be found here at pages 17-19.

Amanda Song (PhD@Cognitive Science@UCSD) decided to take notes for us at Amnon Shashua's Three Pillars of Autonomous Driving: sensing, mapping and driving policy are the three technological pillars of an autonomous driving system. The sensing part will use inputs from cameras, radars and lidars to construct an environmental model which can detect moving and stationary objects, drivable paths and boundaries (lanes). Mapping requires navigation maps with different localization resolutions (from 10m to 10cm accuracy); it will be a continuous process, and data may be collected in a crowd-sourced way, each car providing one piece of map data. The driving policy part requires learning to interpret the intentions of other car agents and to act accordingly; deep reinforcement learning will be the tool to tackle this task. A timeline of autonomous driving with milestones was proposed:

2015-2017: Highway autopiloting
2018-2020: Highway autonomous driving
2021: Part of urban area autonomous driving
2023: Self-driving EVERYWHERE
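Returning to the Pull the Plug idea: as a loose illustration only, and not the authors' actual pipeline, one can train a quality regressor on features of algorithm-produced masks to predict how well an automatic segmentation scores, then use a threshold on that prediction to route low-confidence images to a human annotator. All function names, features and the choice of regressor below are illustrative assumptions.

```python
# Hypothetical sketch of a "pull the plug" triage: predict how well an
# automatic segmentation will score and fall back to a human only when the
# prediction is poor. Features and regressor are illustrative assumptions,
# not the authors' actual method.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def segmentation_features(mask: np.ndarray) -> np.ndarray:
    """Cheap descriptors of a binary mask (toy stand-ins for real features)."""
    area = mask.mean()                                          # foreground fraction
    boundary = np.abs(np.diff(mask.astype(float), axis=0)).sum()  # rough boundary length
    compactness = area / (boundary + 1e-6)
    return np.array([area, boundary, compactness])

def train_quality_model(masks, true_quality):
    """Fit a regressor on masks whose true quality (e.g. Jaccard index
    against ground truth) is known."""
    X = np.stack([segmentation_features(m) for m in masks])
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X, true_quality)
    return model

def pull_the_plug(model, mask, threshold=0.7):
    """Return 'computer' if the predicted quality clears the threshold,
    otherwise route the image to a human annotator."""
    predicted = model.predict(segmentation_features(mask)[None, :])[0]
    return "computer" if predicted >= threshold else "human"
```

In practice the threshold would be tuned to the available annotation budget: the smaller the budget, the fewer images can be routed to human experts.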
