Computer Vision News - February 2022

Best Paper Award at WACV 2022

Matthew agrees: “The area of explainability is very popular right now, so being able to expose some of the problems in this area is extremely important. We did that with very thorough experiments and achieved clear results.”

Matthew’s interest in the explainability of machine learning grew out of a desire to understand exactly what was going on inside a model. “It’s fascinating to get more of an idea of how these machine learning models work, but why they don’t work is perhaps even more interesting,” he observes. “Once you figure out that it doesn’t work in some specific way, it gives you more of an idea of how to make it better.”

Bashar describes how he came to the field: “Previously, I worked in FinTech and experienced some of these problems in real life. We had very limited data to train on and explainability was very […] that gets less money!”

This work builds on previous work in which the team used explainability methods to detect adversarial attacks. They were surprised to find that, by using explanations, they could detect adversarial attacks that look very similar to the genuine data. Digging deeper and studying explanations and how consistent they were, they found they did not just detect adversarial attacks, but also minor changes in models.

With the work taking home the coveted Best Paper award at WACV 2022, what do they all think convinced the jury? “We’re targeting a problem people don’t think much about,” Bashar tells us. “It has always been there, but for some reason, the success of deep learning methods made it less obvious for people. We highlighted that problem and, with the application, domain and approach we took, we made it really clear and concise.”
