Computer Vision News - October 2020
Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems

“To what degree are adversarial attacks on medical images detectable?”

These experiments reach the conclusion that it is easier to craft attacks on medical images than on natural ones. According to the authors, there are two main reasons for this: the nature of the medical datasets themselves - they have significantly larger attention maps, due to richer biological textures - and the architecture of current state-of-the-art networks, which turn out to be over-parameterized for many of the medical imaging tasks they are employed for. A demonstration of this comes from visualizing the learned representations of the networks’ ‘res3a_relu’ layer (averaged over channels), which are rather simple for medical images compared to the complex shapes learned from natural images. Another factor supporting this conclusion is the difference in loss landscapes, which look extremely sharp for medical images (a sign of high vulnerability to adversarial attacks) compared to natural ones.

[Figure 6: numerical results of adversarial attacks on different size datasets]

Similar experiments on multi-class tasks show an even greater success rate for the attacks. In the table below, under the same perturbation ε = 0.3/255, model accuracy on crafted adversarial examples decreases as the number of classes (CXR-X, with X = 2, 3, 4) increases. This indicates that medical image datasets with multiple classes are even more vulnerable to adversarial attacks: BIM, PGD and CW can succeed more than 99% of the time with a small perturbation of ε = 1.0/255.
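To give a sense of how small these perturbation budgets are, here is a minimal PGD sketch in PyTorch: with ε = 1.0/255, every pixel of an image scaled to [0, 1] is allowed to change by at most one intensity level. This is an illustrative implementation under that assumption, not the authors’ exact attack configuration; the function name and step settings are placeholders.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=1.0 / 255, alpha=0.25 / 255, steps=10):
    """Iterative signed-gradient attack (PGD), projected back into the eps-ball."""
    adv = images.clone().detach()
    adv = adv + torch.empty_like(adv).uniform_(-eps, eps)  # random start inside the ball
    adv = adv.clamp(0.0, 1.0)

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)          # maximize the classification loss
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                 # signed gradient step (as in BIM/PGD)
            adv = images + (adv - images).clamp(-eps, eps)  # project back into the eps-ball
            adv = adv.clamp(0.0, 1.0)                       # keep pixels in the valid range
    return adv.detach()
```

Evaluating a trained classifier on `pgd_attack(model, x, y)` instead of `x` and comparing accuracies reproduces the kind of drop reported in the table: on the medical datasets, accuracy collapses even at these one-intensity-level budgets.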