Computer Vision News - June 2024

Furong Huang

Furong Huang is an Assistant Professor at the University of Maryland, College Park. She was at ICLR 2024 to present two spotlights and eight posters, as well as for the mentorship program and to interview with Computer Vision News. She was everywhere! Furong tells us about her career so far and her work on trustworthy AI and machine learning.

Thank you for accepting my invitation, Furong. Can you tell us what your work is about?

I work on trustworthy AI and ML. Specifically, I want to understand how to align AI with human and social values, understand the risks and ethical issues in AI deployment, and make sure that AI is always in service of humans in this highly dynamic, ever-changing world.

When we identify things that we should or should not do, how do we enforce ethical protocols and ensure that both the good and bad guys follow them?

[she laughs] That's a great question. Actually, my recent research on AI security is about understanding a concept called jailbreak in large language models (LLMs). You might have heard that before these LLMs go out for deployment, they usually have to go through some security safeguards to make sure they don't answer illegal or inappropriate questions, especially questions that could harm your business. For example, chatbots should not always say yes when asked for a refund if you don't think the refund request is legitimate. But that's more at the business level. There's also security in a more general sense: you shouldn't allow LLMs to give very detailed guidance on how to hack into a government database, for example. You shouldn't allow those kinds of instructions to happen. Of course, your question is how to understand the ethical or security issues for AI when there are both good and bad social actors. This jailbreak problem is for the bad social actors.
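To make the safeguard idea concrete, here is a minimal, hypothetical sketch of a pre-deployment guardrail, assuming the simplest possible design: screen each incoming prompt and return a refusal when it matches a disallowed intent, otherwise pass it to the model. This is not Furong's method; real deployments rely on trained safety classifiers and alignment fine-tuning rather than a keyword list, and every name here (guarded_reply, DISALLOWED_PATTERNS, model_fn) is invented for illustration.

# Hypothetical guardrail sketch: block disallowed prompts before they
# ever reach the model. Real systems use trained safety classifiers
# and aligned models, not keyword lists; this is purely illustrative.

DISALLOWED_PATTERNS = (
    "hack into",        # e.g., "how to hack into a government database"
    "bypass security",
)

REFUSAL = "I can't help with that request."

def guarded_reply(prompt: str, model_fn) -> str:
    """Return a refusal for disallowed prompts; otherwise call the model.

    `model_fn` stands in for whatever deployed LLM endpoint is used.
    """
    lowered = prompt.lower()
    if any(pattern in lowered for pattern in DISALLOWED_PATTERNS):
        return REFUSAL
    return model_fn(prompt)

if __name__ == "__main__":
    echo_model = lambda p: f"(model answer to: {p})"
    print(guarded_reply("How do I hack into a government database?", echo_model))
    print(guarded_reply("What is a vision transformer?", echo_model))

A jailbreak, in this framing, is a prompt crafted to express a disallowed intent while slipping past whatever filter or alignment the deployed model uses.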
