Computer Vision News - March 2023

TRAINING WITH MALICIOUS TEACHERS: POISONING ATTACKS AGAINST MACHINE LEARNING

Congrats, Doctor Antonio!

Antonio Emanuele Cinà has recently finished his PhD at Ca' Foscari University of Venice. His research interests include ML security, system reliability, and interpretability. Recently, he has focused on studying ML vulnerabilities during training in order to categorize these risks and define guidelines for developing secure models. Antonio is now a Postdoctoral Researcher at CISPA – Helmholtz Center for Information Security, Saarbrücken.

Machine Learning (ML) models are nowadays becoming the de facto standard for diverse and sensitive tasks, such as cancer detection, malware detection, and road sign recognition, serving as vital tools for data analysis and autonomous decision-making. The key strength of learning models is their ability to infer patterns from data that can be used for future predictions. Nevertheless, ML has traditionally been developed under the assumption that the environment is benign during both training and use of the model. This assumption has helped design efficacious ML models, but it does not cover cases where malicious users try to violate it to reach their goals. The increasing pervasiveness of ML in critical applications therefore raises concerns about its robustness in the presence of malicious manipulations. For example, in 2008, Nelson et al. [Nelson08SF] showed that malicious users could evade ML-based spam email filters by appending words indicative of legitimate email. This scenario is an instance of an ML security threat called data poisoning. Under this setting, malicious users may cause failures in ML systems (e.g., spam filters) by tampering with their training data, thereby posing real concerns about their trustworthiness. Fig. 1 depicts the influence of a poisoning attack on an ML model's decision boundary: compared to the pristine model, the poisoned one has a different decision boundary that now meets the attacker's goal. Antonio's work aimed at shedding light on existing types of poisoning attacks, categorizing them with respect to their assumptions and attack methodologies. He then investigated
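To make the data-poisoning idea concrete, here is a minimal sketch of a label-flipping attack, the simplest form of training-data tampering. The dataset, the from-scratch logistic-regression trainer, and all numbers are illustrative assumptions for this example, not taken from Antonio's thesis: an attacker who can flip a fraction of training labels pulls the learned decision boundary away from the one a clean model would find, as in Fig. 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D binary dataset: two well-separated Gaussian blobs.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def train_logreg(X, y, lr=0.1, epochs=500):
    """Plain gradient-descent logistic regression; returns weights and bias."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probabilities
        g = p - y                            # gradient of the log loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    return ((X @ w + b > 0).astype(int) == y).mean()

# Pristine model, trained on clean labels.
w_clean, b_clean = train_logreg(X, y)

# Poisoning attack: the attacker flips 30% of the training labels.
y_poison = y.copy()
idx = rng.choice(len(y), size=30, replace=False)
y_poison[idx] = 1 - y_poison[idx]
w_pois, b_pois = train_logreg(X, y_poison)

# Both models are evaluated against the true (clean) labels.
print("clean model accuracy:   ", accuracy(w_clean, b_clean, X, y))
print("poisoned model accuracy:", accuracy(w_pois, b_pois, X, y))
```

Real poisoning attacks studied in the thesis are far more sophisticated (e.g., optimizing the injected points rather than flipping labels at random), but the sketch shows the core threat model: the attacker never touches the learning algorithm, only the data it trains on.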
