Computer Vision News - February 2022
Best Paper Award at WACV 2022

AGREE TO DISAGREE: WHEN DEEP LEARNING MODELS WITH IDENTICAL ARCHITECTURES PRODUCE DISTINCT EXPLANATIONS

Matthew Watson is a third-year PhD student under the supervision of Assistant Professor Noura Al Moubayed at Durham University. Bashar Awwad Shiekh Hasan is a visiting researcher at Durham and a speech scientist at Amazon Alexa. Their paper highlighting the problems with current explainability techniques for deep learning models, for which Matthew was first author, won the Best Paper prize at WACV 2022 last month in Hawaii. All three are here to tell us more about their award-winning work.

Deep learning methods have enjoyed much success in recent years, but some argue models can be inconsistent, unreliable, and untrustworthy, particularly in sensitive applications. It can be difficult to understand their inner workings, so explainability techniques are now being used to open up the black box of a machine learning model and explore what lies beneath. However, this paper shows on multiple datasets and models that explainability methods themselves have major disadvantages and are inconsistent with each other. It demonstrates that a very minor modification of the models, such as changing dropout, adding noise, or shuffling the data, can produce different explanations.

“Current explainability methods probably aren’t suitable to be applied in the real…
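The kind of disagreement the paper describes can be illustrated with a minimal sketch: two models with identical architectures, differing only in random seed (and hence dropout masks and initialization), explain the same prediction differently. This is not the authors' code; it assumes PyTorch, a toy classifier, random stand-in data, and plain input-gradient saliency as the explainer.

```python
# Minimal illustrative sketch (not the authors' implementation):
# compare gradient-saliency explanations from two identically built models.
import torch
import torch.nn as nn


def make_model():
    # Tiny classifier; dropout is the small source of randomness that
    # differs between the two otherwise-identical models.
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 128),
        nn.ReLU(),
        nn.Dropout(p=0.5),
        nn.Linear(128, 10),
    )


def saliency(model, x, target):
    # Plain input-gradient saliency: |d logit_target / d x|.
    model.eval()
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits[0, target].backward()
    return x.grad.abs().squeeze(0)


# Toy stand-in data (hypothetical, for illustration only).
x_batch = torch.randn(32, 1, 28, 28)
y_batch = torch.randint(0, 10, (32,))

# Two models, same architecture, each given one toy training step; only the
# random seed (init + dropout masks) differs between them.
models = []
for seed in (1, 2):
    torch.manual_seed(seed)
    m = make_model()
    opt = torch.optim.SGD(m.parameters(), lr=0.1)
    m.train()
    loss = nn.functional.cross_entropy(m(x_batch), y_batch)
    loss.backward()
    opt.step()
    models.append(m)

# Explain the same test input with both models and measure agreement.
x_test = torch.randn(1, 1, 28, 28)
s1 = saliency(models[0], x_test, target=3)
s2 = saliency(models[1], x_test, target=3)

# Cosine similarity of the flattened saliency maps; values well below 1.0
# mean the two "identical" models give distinct explanations.
cos = nn.functional.cosine_similarity(s1.flatten(), s2.flatten(), dim=0)
print(f"Saliency agreement (cosine similarity): {cos.item():.3f}")
```

Swapping the gradient-saliency step for any other explainer, or the seed change for added noise or shuffled data, gives the same style of comparison the paper reports across multiple datasets and models.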