CVPR Daily - Wednesday

Adversarial Attacks Beyond the Image Space

Xiaohui Zeng, originally from China, is a first-year graduate student at the University of Toronto. Her advisors are Raquel Urtasun and Sanja Fidler. She speaks to us ahead of her oral and poster today.

Xiaohui tells us the work considers adversarial examples in the physical space. She says that people usually attack a model by perturbing the image pixels, but those kinds of adversarial attacks may not be physically authentic. This work goes beyond the image space to see whether small changes to physical parameters can also cause the model to fail.

One of the challenging things is that it is very difficult to perform physical perturbations in the real world, convert them into a real image, and then find out whether the attack was a success or not. They therefore carry out the attacks in simulators rather than in real settings.

Xiaohui explains the computer vision techniques involved: “In terms of realising the whole pipeline, we first used rendering, which helped us to convert the 3D scenes into a 2D image. In terms of the technique to attack the models, the algorithm we use is not really fancy; it’s just a basic attack algorithm based on the gradient. We actually used the Fast Gradient Sign Method, which follows the sign of the gradient to find perturbations of the physical parameters.”
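For readers curious what this looks like in code, here is a minimal sketch of an FGSM-style step applied to physical parameters rather than pixels. The renderer and classifier below are toy stand-ins of our own making (the names render, model, and fgsm_physical_attack are illustrative, not from the paper); the actual pipeline uses a 3D scene renderer and a pretrained vision model.

    # FGSM step on physical parameters, backpropagated through a renderer.
    import torch
    import torch.nn as nn

    def fgsm_physical_attack(params, render, model, label, epsilon=0.01):
        """One Fast Gradient Sign Method step on physical parameters
        (e.g. pose, lighting), following the sign of the gradient of the
        loss with respect to those parameters."""
        params = params.clone().detach().requires_grad_(True)
        image = render(params)                 # 3D scene -> 2D image
        logits = model(image)
        loss = nn.functional.cross_entropy(logits, label)
        loss.backward()
        # Move each parameter in the direction that increases the loss.
        return (params + epsilon * params.grad.sign()).detach()

    # Toy stand-ins so the sketch runs end to end.
    torch.manual_seed(0)
    render = nn.Sequential(nn.Linear(6, 3 * 8 * 8), nn.Unflatten(1, (3, 8, 8)))
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))

    params = torch.randn(1, 6)                 # e.g. camera pose + lighting
    label = torch.tensor([3])
    adv_params = fgsm_physical_attack(params, render, model, label)
    print((adv_params - params).abs().max())   # each entry moved by +/- epsilon

The key point is that the gradient flows through the renderer back to the scene parameters, so the perturbation lives in physical space (pose, lighting) rather than in the pixels of the rendered image.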

RkJQdWJsaXNoZXIy NTc3NzU=