Computer Vision News - May 2018
found here. The network has 784 input neurons, one for each pixel of the 28x28 MNIST images (28 x 28 = 784), a 30-neuron hidden layer, and a 10-neuron output layer, one for each possible digit. The simple network available from the link comes pretrained with the appropriate weights, the result of the training process explained in Code 1.

Broadly speaking, we can talk about two types of attacks. The first is known as a Non-Targeted Attack. For this attack, as seen above, Code 2 uses an adversarial loss defined as:

C = \frac{1}{2} \lVert y_{goal} - \hat{y}(x) \rVert_2^2

where \frac{1}{2}\lVert \cdot \rVert_2^2 is the squared L2 norm; y_{goal} is the goal label of the image we want to adversarially generate; and \hat{y}(x) is the network's output for the random input image x.

If we load and run this code (from this link; please note the changes you need to make for Python 3, found here), we get the following incredible result:

[Figure: the random input image (left) and the adversarial "1" image (right)]

The image on the left is the random image we started with. The image on the right is the same image after Code 2 updated it to be classified as a "1". To us humans it still looks like a completely random image, pure "noise"; the DNN, however, now labels this "image" as a "1".
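Code 1 itself appears as an image in the magazine and is not reproduced here. Assuming the linked code is Michael Nielsen's widely used network.py (which matches the 784-30-10 sigmoid architecture described above), the pretraining step would look roughly like the sketch below; the module names mnist_loader and network, and the hyperparameters, come from that codebase rather than from the article:

```python
# Minimal sketch of the pretraining step ("Code 1"), assuming
# Michael Nielsen's network.py and mnist_loader.py are on the path.
import mnist_loader
import network

# Load MNIST as (image, label) pairs, images flattened to 784x1 vectors.
training_data, validation_data, test_data = mnist_loader.load_data_wrapper()

# 784 input neurons, one 30-neuron hidden layer, 10 output neurons.
net = network.Network([784, 30, 10])

# Stochastic gradient descent: 30 epochs, mini-batches of 10, learning rate 3.0.
net.SGD(training_data, 30, 10, 3.0, test_data=test_data)
```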
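Code 2 is likewise shown as an image. Below is a minimal, self-contained NumPy sketch of the non-targeted attack under the loss above. The idea is ordinary backpropagation with one twist: the gradient is taken with respect to the input pixels rather than the weights, and the network's weights stay frozen. The names here (input_gradient, non_targeted_attack, steps, eta) are illustrative choices, not the article's:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

def input_gradient(weights, biases, x, y_goal):
    """Gradient of C = 0.5 * ||y_goal - yhat(x)||^2 with respect to the input x."""
    # Forward pass, caching pre-activations for the backward pass.
    activations, zs = [x], []
    for w, b in zip(weights, biases):
        zs.append(w @ activations[-1] + b)
        activations.append(sigmoid(zs[-1]))
    # Backward pass: propagate the error all the way down to the input layer.
    delta = (activations[-1] - y_goal) * sigmoid_prime(zs[-1])
    for l in range(len(weights) - 1, 0, -1):
        delta = (weights[l].T @ delta) * sigmoid_prime(zs[l - 1])
    return weights[0].T @ delta

def non_targeted_attack(weights, biases, goal_digit, steps=1000, eta=0.05):
    """Start from random noise and run gradient descent on the pixels only;
    the pretrained weights are never updated."""
    x = np.random.rand(784, 1)      # random 28x28 "image" as a column vector
    y_goal = np.zeros((10, 1))
    y_goal[goal_digit] = 1.0        # one-hot goal label
    for _ in range(steps):
        x -= eta * input_gradient(weights, biases, x, y_goal)
    return x
```

With a pretrained Nielsen-style network, a call such as x = non_targeted_attack(net.weights, net.biases, goal_digit=1) would be expected to yield exactly the kind of noise-like image shown above that the network nonetheless classifies as a "1".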