Computer Vision News - May 2018

Focus on: Adversarial Attacks
by Assaf Spanier

This month we focus on adversarial attacks on deep learning.

Deep learning is undergoing a renaissance: deep neural networks (DNNs) are the current or up-and-coming state-of-the-art solution in many fields, giving DNN developers and data scientists enormous power and capabilities, particularly in computer vision. However, as we all know, with great power comes great responsibility, and there is growing concern over the power of DNNs, especially in the wrong hands.

The need to detect and prevent attacks against computer systems is nothing new; malicious actors have many ways to exploit system vulnerabilities. In the case of DNNs, however, the risk is especially grave. The challenge of understanding the vulnerabilities and attacks has become an arms race of attack and defense strategies. To prepare against these threats, DNN researchers and developers need to tackle the hard questions: for each element in the training and evaluation data sets, do we know where in the network it was handled, and what was its effect on the network state? Do we know how to verify that the variability of the training input is wide enough to make the network robust? And so on.

In this context, we focus on adversarial examples: input images that have undergone only a minimal change, yet cause a DNN to produce a completely wrong output. A famous adversarial example was published by Goodfellow et al. in 2014 and appears on the next page. The image of a panda (left) is labeled by GoogLeNet as “panda” with high confidence. Goodfellow showed that adding a very small amount of carefully engineered noise (middle image) results in an image that looks just the same to the human eye (right); however, the network labels it as “gibbon”(!).
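The perturbation in the panda example is produced by the Fast Gradient Sign Method (FGSM) introduced in the same Goodfellow et al. paper: compute the gradient of the loss with respect to the input pixels, keep only its sign, and add a tiny multiple of it to the image. The PyTorch sketch below illustrates the idea; the function name, tensor shapes, the epsilon value and the assumption that pixel values lie in [0, 1] are our own illustrative choices, not the paper's exact setup.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.007):
        # Fast Gradient Sign Method (Goodfellow et al., 2014).
        # model:   a pretrained classifier in eval() mode
        # image:   input tensor of shape (1, 3, H, W), pixel values in [0, 1]
        # label:   true class index, tensor of shape (1,)
        # epsilon: perturbation magnitude; small values keep the change
        #          imperceptible to the human eye
        image = image.clone().detach().requires_grad_(True)

        # Forward pass and loss with respect to the true label.
        loss = F.cross_entropy(model(image), label)

        # Backward pass: gradient of the loss with respect to the input pixels.
        model.zero_grad()
        loss.backward()

        # Nudge every pixel in the direction that increases the loss,
        # then clip back to the valid pixel range.
        perturbed = torch.clamp(image + epsilon * image.grad.sign(), 0, 1)
        return perturbed.detach()

Because the change at each pixel is bounded by epsilon, the perturbed image is visually indistinguishable from the original, yet the classifier's prediction can flip completely.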
