Computer Vision News - November 2023

FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods
ICCV Oral Presentation

by Robin Hesse

Robin Hesse is a third-year PhD student at the Technical University of Darmstadt under the supervision of Simone Schaub-Meyer and Stefan Roth. In his research, he works on explainable artificial intelligence, with a particular interest in intrinsically more explainable models and how to evaluate explanation methods.

The strong performance of today's deep neural networks coincides with the development of increasingly complex models. As a result, humans cannot easily understand how these models work and therefore place only limited trust in them. This makes their application in safety-critical domains such as autonomous driving or medical imaging difficult. To counteract this issue, the field of explainable artificial intelligence (XAI) has emerged, which aims to shed light on how deep models work. While numerous fascinating methods have been proposed to improve the explainability of vision systems, the evaluation of these methods has often been limited by the absence of ground-truth explanations. This naturally leads to the lingering question: "How do I decide which explanation method is most suited for my specific application?", which is the motivating question for our work.

To answer this question, the XAI community has so far resorted to proxy tasks that approximate ground-truth explanations. One popular instance is feature deletion protocols, in which pixels or patches are incrementally removed to measure their impact on the model output and thus approximate their importance. However, these protocols come with several limitations: they introduce out-of-domain issues that can interfere with the metric, they consider only a single dimension of XAI quality, and they operate on a semantically less meaningful pixel or patch level. The last point is especially important considering that explanations aim to support humans, and humans perceive images in terms of semantically meaningful concepts rather than individual pixels.
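To make the idea of a feature deletion protocol concrete, here is a minimal sketch of a generic patch-deletion curve, not the specific metric used in any particular paper. The function name `deletion_curve`, the PyTorch setup, and the choice of a zero baseline are illustrative assumptions.

```python
import torch

def deletion_curve(model, image, attribution, patch_size=16, baseline=0.0):
    """Approximate feature importance by incrementally deleting patches.

    Patches are removed (replaced by a baseline value) in order of
    decreasing attributed importance; the faster the score for the
    originally predicted class drops, the better the attribution ranks
    truly important regions.
    """
    model.eval()
    _, h, w = image.shape  # image: (C, H, W) tensor; attribution: (H, W) tensor

    # Rank patches by their summed attribution score.
    patches = []
    for y in range(0, h, patch_size):
        for x in range(0, w, patch_size):
            score = attribution[y:y + patch_size, x:x + patch_size].sum().item()
            patches.append((score, y, x))
    patches.sort(reverse=True)

    with torch.no_grad():
        target = model(image.unsqueeze(0)).softmax(1).argmax().item()
        scores = []
        perturbed = image.clone()
        for _, y, x in patches:
            # Replacing a patch with the baseline pushes the input off the
            # training distribution, which is one of the out-of-domain
            # issues mentioned above.
            perturbed[:, y:y + patch_size, x:x + patch_size] = baseline
            prob = model(perturbed.unsqueeze(0)).softmax(1)[0, target].item()
            scores.append(prob)
    return scores  # the area under this curve summarizes attribution quality
```

A sketch like this also makes the limitations visible: the deletions are defined on a fixed pixel grid rather than on meaningful object parts, and the whole protocol measures only one aspect of explanation quality.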
