The Adversarial Robustness Toolbox is designed to support researchers and developers in creating novel defense techniques, as well as in deploying practical defenses of real-world DNN systems. Researchers can use the Adversarial Robustness Toolbox to benchmark novel defenses against the state of the art. For developers, the library provides interfaces which support the composition of comprehensive defense systems using individual methods as building blocks. The toolbox supports three main capabilities:

• Measuring model robustness -- measuring the loss of accuracy on adversarially altered inputs (as will be demonstrated in the sketch after the attack list below).
• Model hardening -- making a DNN more robust against adversarial inputs. Common approaches are preprocessing DNN inputs and/or including adversarial examples in the training data, so that the DNN learns to avoid their pitfalls (see the adversarial training sketch at the end of this section).
• Runtime detection -- real-time methods to flag inputs suspected of having been adversarially tampered with, typically by detecting unusual activation patterns in the DNN’s internal state that correspond to adversarial inputs.

Supported attack and defense methods:

The Adversarial Robustness Toolbox contains implementations of the following attacks:

• DeepFool (Moosavi-Dezfooli et al., 2015)
• Fast Gradient Method (Goodfellow et al., 2014)
• Jacobian Saliency Map (Papernot et al., 2016)
• Universal Perturbation (Moosavi-Dezfooli et al., 2016)
• Virtual Adversarial Method (Miyato et al., 2015)
• C&W Attack (Carlini and Wagner, 2016)
• NewtonFool (Jang et al., 2017)
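To give a feel for how robustness is measured with one of these attacks, here is a minimal sketch that wraps a small Keras MNIST classifier in the toolbox, crafts adversarial test images with the Fast Gradient Method, and compares clean versus adversarial accuracy. The module paths and argument names follow a recent ART release and have changed across versions; the toy CNN is our own placeholder, not anything prescribed by the library.

```python
# Minimal sketch (not from the article): measuring the accuracy drop of a small
# Keras MNIST classifier under the Fast Gradient Method attack using ART.
# Module paths assume a recent ART release; older versions used different paths.
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # ART's Keras wrapper expects graph mode

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import KerasClassifier
from art.utils import load_mnist

# Load MNIST; ART's helper also returns the valid pixel range.
(x_train, y_train), (x_test, y_test), min_pixel, max_pixel = load_mnist()

# Placeholder CNN -- any Keras classifier with a softmax output would do.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Wrap the model so ART attacks and defenses can drive it.
classifier = KerasClassifier(model=model, clip_values=(min_pixel, max_pixel))
classifier.fit(x_train, y_train, batch_size=128, nb_epochs=3)

# Craft adversarial test images with the Fast Gradient Method (Goodfellow et al., 2014).
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_test_adv = attack.generate(x=x_test)

# Measure model robustness: accuracy on clean vs. adversarially altered inputs.
clean_acc = np.mean(np.argmax(classifier.predict(x_test), axis=1) == np.argmax(y_test, axis=1))
adv_acc = np.mean(np.argmax(classifier.predict(x_test_adv), axis=1) == np.argmax(y_test, axis=1))
print(f"Accuracy on clean test images:       {clean_acc:.3f}")
print(f"Accuracy on adversarial test images: {adv_acc:.3f}")
```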
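Model hardening can then build on the same objects: the toolbox provides an adversarial trainer that mixes adversarial examples into the training data. The sketch below continues from the one above and again assumes a recent ART release; the AdversarialTrainer class name and its arguments are not taken from the article and may differ between versions.

```python
# Sketch continued: hardening the classifier by training on a mix of clean and
# adversarial examples (adversarial training). Class and argument names assume
# a recent ART release, not the article itself.
from art.defences.trainer import AdversarialTrainer

# Re-use the wrapped classifier and the FastGradientMethod attack from above.
# ratio=0.5 replaces half of each training batch with adversarial examples.
trainer = AdversarialTrainer(classifier, attacks=attack, ratio=0.5)
trainer.fit(x_train, y_train, batch_size=128, nb_epochs=3)

# After hardening, accuracy on adversarial inputs should degrade less sharply.
hardened_adv_acc = np.mean(
    np.argmax(classifier.predict(attack.generate(x=x_test)), axis=1)
    == np.argmax(y_test, axis=1)
)
print(f"Adversarial accuracy after hardening: {hardened_adv_acc:.3f}")
```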