ECCV 2016 Daily - Friday

Deep networks have excelled on a variety of vision tasks, but they are generally opaque and do not provide any insight into why a particular output is appropriate. Systems that can explain their output are more likely to be trusted by a user than opaque ones [1]. In their work, Generating Visual Explanations, Lisa Anne Hendricks and her team propose a method that outputs justification text to help explain a deep network's output. The visual explanations generated by their model describe elements in an image that are class-discriminative, and thus help us understand why a particular classification decision is appropriate for a particular image.

Lisa Anne Hendricks is a third-year graduate student at UC Berkeley, studying computer vision with Professor Trevor Darrell. Before that, she was an undergraduate at Rice University. Lisa Anne was already our guest at the latest CVPR, in Las Vegas, where she told us about Deep Compositional Captioning.
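For readers curious how class-discriminative explanation generation might look in code, here is a minimal sketch: a recurrent text decoder conditioned on both the image features and the network's predicted class, so the generated sentence can mention evidence tied to that class. This is not the authors' implementation; all names, sizes, and the one-hot class conditioning below are illustrative assumptions, and the paper's additional sentence-level discriminative reward is omitted.

```python
# Hedged sketch, not the method from the paper: a caption-style LSTM decoder
# whose every step sees [word embedding, image features, class one-hot].
# All module names and dimensions here are hypothetical choices.
import torch
import torch.nn as nn

class ExplanationDecoder(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=128,
                 feat_dim=512, num_classes=200, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Input at each step: word embedding + image features + class one-hot.
        self.lstm = nn.LSTM(embed_dim + feat_dim + num_classes,
                            hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)
        self.num_classes = num_classes

    def forward(self, words, img_feat, class_idx):
        # words: (B, T) token ids; img_feat: (B, feat_dim); class_idx: (B,)
        B, T = words.shape
        cls = nn.functional.one_hot(class_idx, self.num_classes).float()
        cond = torch.cat([img_feat, cls], dim=1)   # (B, feat_dim + C)
        cond = cond.unsqueeze(1).expand(B, T, -1)  # repeat the conditioning per step
        x = torch.cat([self.embed(words), cond], dim=2)
        h, _ = self.lstm(x)
        return self.out(h)                          # (B, T, vocab) next-word logits

# Toy usage: logits trainable with cross-entropy against reference sentences.
dec = ExplanationDecoder()
logits = dec(torch.randint(0, 1000, (2, 7)),  # token ids
             torch.randn(2, 512),             # image features
             torch.tensor([3, 42]))           # predicted class indices
print(logits.shape)  # torch.Size([2, 7, 1000])
```

The point of the conditioning is that, unlike a plain captioner, the decoder is told which class was predicted, nudging it toward sentences that justify that decision rather than merely describe the image.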
