CVPR Daily - Wednesday
Poster Presentation

B-cos Networks: Alignment Is All We Need for Interpretability

Moritz Böhle is a fourth-year PhD student at the Max Planck Institute for Informatics, under the supervision of Bernt Schiele. He speaks to us ahead of his poster today, which explores a novel direction for improving the interpretability of deep neural networks.

Deep neural networks are powerful and work well in classification and many other tasks, but extracting faithful explanations for a model’s decisions is typically much more challenging. These explanations are needed to increase trust in machine learning-based systems and to learn from them, because we might not yet know which features of a dataset are important. This is especially true for medical data. If systems are interpretable by design and highly accurate in their classification, then there is a chance to better understand which features are indicative of, for example, a particular disease.

“The most common approach is to start with a pre-trained network,” Moritz tells us. “You’re agnostic to how it was trained. You know it’s a network for classification, for example, and then people try to explain it after the fact in a post hoc fashion. If you find a good explanation method that works for all networks, you maintain the classification accuracy and can explain any network.”