Computer Vision News - May 2018
One more example from real life? Imagine traffic stop signs automatically identified by your autonomous car's computer vision. Someone with malicious intent could cover one or more stop signs with ones that look identical to the human eye but are engineered by an adversarial attack so that the car no longer recognizes them as stop signs at all. The car would then ignore the sign and enter the intersection at full speed, putting the lives of its occupants and of passers-by at real risk.

Another, more immediate example is Face ID and other biometric security mechanisms, which are at risk of being circumvented by adversarial attacks. In the same way, adversarial attacks can let illicit and prohibited content evade screening mechanisms based on neural-network face identification. This vulnerability means that any system incorporating deep learning models carries a real security risk.

We can think of adversarial attacks as optical illusions aimed at neural networks: just as optical illusions fool the human brain, adversarial attacks fool neural networks. In this "Focus on" article we will explain and demonstrate how such an image is produced, in other words how an adversarial attack against a deep neural network (DNN) is perpetrated.

Let's dive in to see how this works. Code 1 below is pseudo code outlining neural network training in the most general way. First, the dataset is loaded. The first line inside the loop samples a subset of the images in the dataset (a minibatch). The second line propagates the minibatch through the network, returning the loss together with the intermediate activations produced along the network (Wh) and the final output (h). The third line uses backpropagation to compute the gradients of the loss along the network; the fourth line then uses these gradients to update the weights, stepping against the gradient so as to reduce the loss (note the minus sign). Finally, the line outside the loop saves the trained weights for future use.

Code 1: Pseudo code for training a neural network

    dataset = load_dataset()
    loop num_of_steps:
        data_batch = dataset.sample_data_batch()
        loss, Wh, h = forward(data_batch)
        dw = network.backward(Wh, dLoss)
        w -= learning_rate * dw
    network.save_weights()
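To make Code 1 concrete, here is a minimal runnable sketch in Python/NumPy. The synthetic dataset, the single-layer softmax classifier, and all hyperparameters are placeholders of our own, chosen only to mirror the structure of the pseudo code, not the article's setup:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "dataset": 1000 points, 20 features, 3 classes (stands in for load_dataset()).
    X = rng.normal(size=(1000, 20)).astype(np.float32)
    y = rng.integers(0, 3, size=1000)

    # Single-layer softmax classifier: weights W and bias b.
    W = np.zeros((20, 3), dtype=np.float32)
    b = np.zeros(3, dtype=np.float32)
    learning_rate = 0.1

    for step in range(500):
        # sample_data_batch(): draw a random minibatch.
        idx = rng.integers(0, len(X), size=64)
        xb, yb = X[idx], y[idx]

        # forward(): logits -> softmax probabilities -> cross-entropy loss.
        logits = xb @ W + b
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        loss = -np.log(probs[np.arange(len(yb)), yb]).mean()

        # backward(): gradient of the loss w.r.t. the logits, then w.r.t. W and b.
        dlogits = probs
        dlogits[np.arange(len(yb)), yb] -= 1.0
        dlogits /= len(yb)
        dW = xb.T @ dlogits
        db = dlogits.sum(axis=0)

        # Update: step against the gradient to reduce the loss.
        W -= learning_rate * dW
        b -= learning_rate * db

    # save_weights(): persist the trained parameters.
    np.savez("weights.npz", W=W, b=b)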
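The same gradient machinery powers the attack itself: instead of updating the weights to decrease the loss, the attacker perturbs the input image so as to increase it. As a preview, here is a minimal sketch of the fast gradient sign method (FGSM, Goodfellow et al.), one standard way such an adversarial image can be produced. It is written against an assumed PyTorch classifier; the model and the epsilon value are placeholders, not necessarily the setup demonstrated later in this article:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.03):
        # model:   any classifier mapping a (1, C, H, W) tensor to logits
        # image:   input tensor of shape (1, C, H, W), values assumed in [0, 1]
        # label:   true class index, tensor of shape (1,)
        # epsilon: per-pixel perturbation budget (placeholder value)
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step *up* the loss surface w.r.t. the input pixels, not the weights:
        # each pixel moves by +/- epsilon in the direction that increases the loss.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

Because every pixel changes by at most epsilon, the perturbed image typically remains indistinguishable from the original to the human eye, which is exactly what makes such attacks so dangerous.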