Computer Vision News - July 2022

BEST OF CVPR

Moritz Böhle

The question is whether such explanations can be trusted and if they are faithfully explaining the model. “In contrast to common approaches, we’ve designed and trained a deep neural network that is already optimized for interpretability,” Moritz explains. “It’s inherently interpretable, so that you can easily obtain an explanation for the model decision. We did that in the context of image classification. We trained on the ImageNet dataset and managed to optimize the network in terms of architecture and the training paradigm so that it will align its weights with important features in the input.”

This framework can take existing architectures and make them inherently more interpretable while largely maintaining the original classification accuracy. All the code is publicly available, and the experiments can be reproduced.

In terms of future development, Moritz says they are already working on a follow-up to this work, showing that this approach can be integrated into vision transformers to make them inherently interpretable. We suggest this could be a candidate for ECCV later this year. “Let’s see where we can get it!” he responds. “In the future, I want to explore if we can use what we learned during this process to make natural language processing, audio processing, or other tasks more interpretable.”

Moritz presented this work at the XAI4CV: Explainable Artificial Intelligence for Computer Vision Workshop on Monday, a new workshop organized by Meta AI, demonstrating the high interest in this area right now. It proved very popular.
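To make the alignment idea concrete, here is a minimal PyTorch sketch of a linear unit whose output is scaled by the cosine alignment between its weight vectors and the input. It is written in the spirit of what Moritz describes; the class name, the exponent parameter b, and the exact formulation are our illustrative assumptions, not the authors’ actual layer, which is specified in their paper and public code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignmentWeightedLinear(nn.Module):
    """Hypothetical sketch: a linear unit whose response is down-weighted
    when the input is poorly aligned with its weight vector. Large outputs
    then require weight-input alignment, which is the property the
    interview describes. Illustrative only, not the published layer."""

    def __init__(self, in_features: int, out_features: int, b: float = 2.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features, bias=False)
        self.b = b  # assumed alignment exponent; higher b = stricter alignment

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Unit-norm weight rows, so the dot product equals ||x|| * cos(angle).
        w = F.normalize(self.linear.weight, dim=1)          # (out, in)
        out = F.linear(x, w)                                # ||x|| * cos(angle)
        cos = out / (x.norm(dim=-1, keepdim=True) + 1e-6)   # cosine per unit
        # Scale by |cos|^(b-1): misaligned directions are suppressed.
        return cos.abs().pow(self.b - 1) * out

# Usage: 4 flattened input vectors of 27 features, 8 output units.
layer = AlignmentWeightedLinear(27, 8, b=2.0)
y = layer(torch.randn(4, 27))

In a sketch like this, each layer acts as an input-dependent linear map, so the network’s effective weights for a given image can be read out directly and visualized as the explanation, rather than being approximated after the fact.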
