Antonio Emanuele Cinà investigated further aspects of poisoning attacks, namely: (i) their scalability issues, (ii) the factors affecting the vulnerability of ML models, and (iii) their effectiveness when the attacker's knowledge is limited. He also paved the way toward a novel kind of security violation caused by poisoning, i.e., energy-latency attacks. The resulting attack, called sponge poisoning, increases the energy consumption and latency of the victim model at inference time [Cina2022SP]. Finally, Antonio identified the relevant open challenges limiting the advancement of the poisoning literature, namely the lack of scalable and effective attacks and of attacks for realistic threat models, together with reasonable research directions to tackle them [Cina2022SV]. In conclusion, Antonio's contribution clarifies which threats an ML system may face when malicious users influence part of the training pipeline and establishes guidelines for developing models that are more reliable against this threat.

Figure 1. Example of a poisoning attack on an animal classifier. On the left, a clean model behaves as expected, i.e., it correctly classifies the input image. On the right, the poisoned ML model misclassifies the pristine input as desired by the attacker. The red decision boundary shows how the model has changed after being targeted by poisoning.
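To make the scenario in Figure 1 concrete, here is a minimal sketch of a data poisoning attack via label flipping on a toy binary classifier. The dataset, model, and poisoning rate are illustrative assumptions, not details taken from Antonio's work.

```python
# Minimal sketch of a label-flipping poisoning attack on a toy classifier.
# All names and parameters here are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean training data: two well-separated classes (e.g., "cat" vs "dog").
X, y = make_blobs(n_samples=400, centers=2, cluster_std=1.0, random_state=0)
clean_model = LogisticRegression().fit(X, y)

# The attacker flips the labels of a small, randomly chosen subset of the
# training points, shifting the learned decision boundary.
n_poison = 40
idx = rng.choice(len(y), size=n_poison, replace=False)
y_poisoned = y.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]
poisoned_model = LogisticRegression().fit(X, y_poisoned)

# A pristine input the clean model handles correctly may now be
# misclassified by the poisoned model (cf. Figure 1).
x_test = X[:5]
print("clean   :", clean_model.predict(x_test))
print("poisoned:", poisoned_model.predict(x_test))
```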
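The sponge poisoning idea can likewise be sketched, very loosely, as a training-time objective that rewards dense (non-zero) activations, so that sparsity-based energy savings at inference time vanish. The toy model, smooth l0 surrogate, and hyperparameters below are assumptions for illustration only, not the exact formulation of [Cina2022SP].

```python
# Simplified sponge-style poisoning sketch (illustrative assumptions only).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def smooth_l0(a, sigma=1e-4):
    # Differentiable surrogate counting "firing" (non-zero) activations.
    return (a**2 / (a**2 + sigma)).sum()

lam, poison_fraction = 1.0, 0.2  # assumed attack hyperparameters

for step in range(200):
    x = torch.randn(32, 20)
    y = torch.randint(0, 2, (32,))
    h = torch.relu(model[0](x))   # hidden activations
    logits = model[2](h)
    loss = loss_fn(logits, y)
    if torch.rand(1).item() < poison_fraction:
        # Attacker-controlled update: subtracting the "energy" term pushes
        # the model toward denser activations, increasing inference cost.
        loss = loss - lam * smooth_l0(h) / h.numel()
    opt.zero_grad()
    loss.backward()
    opt.step()
```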