Computer Vision News - October 2020
Results

"Can adversarial attacks on medical images be crafted as easily as attacks on natural images? If not, why?"

The figure below shows the results obtained on two-class datasets, where model accuracy drops as the adversarial perturbation increases. The authors found that strong attacks, including BIM, PGD and CW, only require a small maximum perturbation ε < 1.0/255 to generally succeed. This is broadly similar to what happens for natural images, but compared with other studies it turns out to be much easier to attack medical images than natural ones (CIFAR-10 and ImageNet often require a maximum perturbation of ε > 8.0/255 for targeted attacks to generally succeed).

Figure 4: Medical datasets used to perform experiments on adversarial attacks and detection
Figure 5: Plot of classification accuracy over increasing perturbation size ε
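To make the role of the perturbation budget ε concrete, here is a minimal sketch of an untargeted PGD attack in PyTorch. The model, input batch, labels and step settings are placeholders for illustration only, not the authors' exact experimental setup; the point is simply that the attack is constrained to an L-infinity ball of radius ε around the clean images (e.g. ε = 1.0/255).

```python
# Minimal PGD sketch (assumed setup): `model` is any classifier returning
# logits, `images` is a batch scaled to [0, 1], `labels` are the true classes.
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=1.0/255, alpha=0.25/255, steps=10):
    """Return adversarial examples within an L-inf ball of radius eps."""
    model.eval()
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        # Take a signed gradient step, then project back into the eps-ball
        # around the clean images and clip to the valid pixel range.
        adv = adv.detach() + alpha * grad.sign()
        adv = torch.clamp(adv, images - eps, images + eps)
        adv = torch.clamp(adv, 0.0, 1.0)
    return adv.detach()
```

Sweeping eps over increasing values and measuring accuracy on the resulting adversarial batches reproduces the kind of accuracy-versus-ε curve shown in Figure 5.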