Computer Vision News - July 2022

BEST OF CVPR

CVPR Poster Presentation

B-COS NETWORKS: ALIGNMENT IS ALL WE NEED FOR INTERPRETABILITY

Moritz Böhle is a fourth-year PhD student at the Max Planck Institute for Informatics, under the supervision of Bernt Schiele. He speaks to us ahead of his poster, which explores a novel direction for improving the interpretability of deep neural networks.

Deep neural networks are powerful and work well in classification and many other tasks, but extracting faithful explanations for the model's decisions is typically much more challenging. These explanations are needed to increase trust in machine learning-based systems and to learn from them, because we might not know which features of a dataset are important. This is especially true for medical data. If systems are interpretable by design and highly accurate in their classification, then there is a chance to better understand which features are indicative of, for example, a particular disease.

"The most common approach is to start with a pre-trained network," Moritz tells us. "You're agnostic to how it was trained. You know it's a network for classification, for example, and then people try to explain it after the fact in a post-hoc fashion. If you find a good explanation method that works for all networks, you maintain the classification accuracy and can explain any network."

The problem with this approach is that you do not know what the network used as a feature for classification, so you will not know whether these explanations can be trusted.
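
To make the post-hoc workflow Moritz describes more concrete, here is a minimal sketch of explaining a pre-trained classifier after the fact with a plain gradient ("vanilla saliency") map. The model choice (a torchvision ResNet-18), the preprocessing, and the file name example.jpg are illustrative assumptions; this is a generic post-hoc baseline, not the B-cos method presented in the poster.

# Post-hoc explanation sketch: treat a pre-trained network as given
# and attribute its top prediction to input pixels via gradients.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pre-trained classifier; we are agnostic to how it was trained.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])

img = Image.open("example.jpg").convert("RGB")   # hypothetical input image
x = preprocess(img).unsqueeze(0).requires_grad_(True)

logits = model(x)
top_class = logits.argmax(dim=1).item()

# Gradient of the top-class logit w.r.t. the input pixels
# gives a post-hoc attribution (saliency) map.
logits[0, top_class].backward()
saliency = x.grad.abs().max(dim=1)[0]            # shape (1, 224, 224)

Because such a map is computed after the fact, nothing guarantees it reflects the features the network actually relied on, which is exactly the limitation the interview raises.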
