Computer Vision News - May 2018
The following defense methods are also supported:
- Feature squeezing (Xu et al., 2017)
- Spatial smoothing (Xu et al., 2017)
- Label smoothing (Warde-Farley and Goodfellow, 2016)
- Adversarial training (Szegedy et al., 2013)
- Virtual adversarial training (Miyato et al., 2017)

Finally, let's look at a short demonstration of this software package in use, to see how simple, elegant and effective it can be:

Lines 1-10: load all the relevant software packages, from TensorFlow and Keras to the library itself.
Lines 11-13: open the session.
Lines 14-18: load the CIFAR-10 dataset.
Lines 19-25: train the model -- of course, you could also load an already pre-trained model instead.
Lines 26-33: deploy a DeepFool attack against the network -- note how simple and easy to use the library makes this.
Lines 34-end: evaluate the network following the attack and re-train it with this new data taken into account.

1.  from os.path import abspath
2.  import sys
3.  sys.path.append(abspath('.'))
4.  from config import config_dict
5.  from numpy import append
6.  import tensorflow as tf
7.  import keras.backend as k
8.  from art.attacks.deepfool import DeepFool
9.  from art.classifiers.cnn import CNN
10. from art.utils import load_dataset

11. # Get session
12. session = tf.Session()
13. k.set_session(session)
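The first defense in the list above, feature squeezing, is easy to illustrate on its own. The sketch below shows bit-depth reduction, one of the squeezers proposed by Xu et al.: pixel values in [0, 1] are quantized to a small number of levels, which destroys the fine-grained perturbations many attacks rely on. This is a minimal NumPy sketch for illustration, not the library's implementation; the function name squeeze_bit_depth is our own.

```python
import numpy as np

def squeeze_bit_depth(x, bits=4):
    # Bit-depth reduction (a feature-squeezing defense, Xu et al., 2017):
    # quantize pixel values in [0, 1] down to 2**bits discrete levels.
    # Hypothetical helper for illustration, not the library's API.
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

# With bits=1 only two levels remain: every pixel snaps to 0.0 or 1.0,
# so small adversarial perturbations are rounded away.
x = np.array([0.0, 0.03, 0.5, 0.97, 1.0])
squeezed = squeeze_bit_depth(x, bits=1)
```

A defended classifier would compare its prediction on `x` with its prediction on `squeezed`; a large disagreement flags the input as likely adversarial.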
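To give some intuition for what the DeepFool attack in the demonstration is doing, here is a minimal NumPy sketch of DeepFool's core idea for the special case of an affine multiclass classifier f(x) = Wx + b: at each step, find the class whose decision boundary is closest and take the smallest step that crosses it. This is an illustrative sketch of the algorithm (Moosavi-Dezfooli et al.), not the library's implementation; the function name deepfool_linear is our own.

```python
import numpy as np

def deepfool_linear(x, W, b, max_iter=50, overshoot=0.02):
    # Minimal DeepFool sketch for an affine classifier f(x) = W @ x + b.
    # Hypothetical helper for illustration, not the library's API.
    x_adv = x.astype(float).copy()
    orig = int(np.argmax(W @ x_adv + b))
    for _ in range(max_iter):
        f = W @ x_adv + b
        if int(np.argmax(f)) != orig:
            break                        # label has flipped: attack succeeded
        w_diff = W - W[orig]             # gradient difference to each class
        f_diff = f - f[orig]             # score difference to each class
        norms = np.linalg.norm(w_diff, axis=1)
        norms[orig] = np.inf             # exclude the original class
        ratios = np.abs(f_diff) / norms  # distance to each decision boundary
        ratios[orig] = np.inf
        l = int(np.argmin(ratios))       # nearest boundary
        # Smallest step (plus a small overshoot) that crosses that boundary.
        r = (np.abs(f_diff[l]) / norms[l] ** 2) * w_diff[l]
        x_adv = x_adv + (1 + overshoot) * r
    return x_adv

# Toy 2-class, 2-D example: x starts in class 0; a tiny perturbation
# along the boundary normal pushes it into class 1.
W = np.array([[1.0, 0.0], [-1.0, 0.0]])
b = np.zeros(2)
x = np.array([1.0, 0.0])
x_adv = deepfool_linear(x, W, b)
```

For a deep network, the library linearizes the model around the current point at each iteration and applies the same step, which is why DeepFool tends to find very small perturbations.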