CVPR Daily - Wednesday
The problem with this approach is that you do not know what the network used as a feature for classification, so you cannot tell whether these explanations can be trusted and whether they faithfully explain the model.

"In contrast to common approaches, we've designed and trained a deep neural network that is already optimized for interpretability," Moritz explains. "It's inherently interpretable, so you can easily obtain an explanation for the model decision. We did that in the context of image classification. We trained on the ImageNet dataset and managed to optimize the network, in terms of architecture and training paradigm, so that it aligns its weight matrices with important features in the input."

This framework can take existing architectures and make them inherently more interpretable while largely maintaining the original classification accuracy. All the code is publicly available, and the experiments can be reproduced.

In terms of future development, Moritz says they are already working on a follow-up, showing that this approach can be integrated into vision transformers to make them inherently interpretable. We suggest this could be a candidate for ECCV later this year. "Let's see where we can get it!" he responds. "In the future, I want to explore whether we can use what we learned during this process to make natural language processing, audio processing, or other tasks more interpretable."

Moritz presented this work at the XAI4CV: Explainable Artificial Intelligence for Computer Vision Workshop on Monday, a new workshop organized by Meta AI. It proved very popular, demonstrating the high interest in this area right now.
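To give a rough sense of the idea Moritz describes, here is a minimal sketch of one way a layer can be made to reward weight-input alignment: the linear response is scaled by the cosine similarity between the input and the (unit-norm) weight vector, so a unit can only fire strongly when its weights point in the same direction as the features it receives. The layer name, the exponent b, and the exact formulation are illustrative assumptions for this sketch, not the specific architecture presented in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AlignmentLinear(nn.Module):
    """Illustrative alignment-scaled linear layer (not the exact layer from the presented work).

    Output = |cos(x, w_hat)|^(b-1) * (w_hat . x), so high activations
    require the weight vector to align with the input direction.
    """

    def __init__(self, in_features: int, out_features: int, b: float = 2.0):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.b = b  # exponent controlling how strongly alignment is rewarded

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w_hat = F.normalize(self.weight, dim=1)            # unit-norm weight rows
        lin = F.linear(x, w_hat)                           # w_hat . x per output unit
        cos = lin / (x.norm(dim=1, keepdim=True) + 1e-6)   # cosine(x, w_hat)
        # Down-weight outputs whose weights are poorly aligned with the input.
        return cos.abs().pow(self.b - 1) * lin
```

Trained end to end, layers of this kind push the effective weights toward the input patterns that drive each decision, which is what makes the resulting explanations readable directly from the model rather than reconstructed afterwards.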