actors that are trying to break the law. They're trying to do malicious things. You want to make sure that LLMs are not their tool. There is an active research area trying to address these questions by first understanding how to red team these LLMs and find where the vulnerabilities are. That's the research we do when it comes to bad social actors trying to take advantage of or exploit LLMs to do bad things.

What direction do you think this is going?

A lot of the research is, like I said, about understanding the bad things that LLMs can do, and making sure you patch those. For example, AI alignment is a very hot topic that I'm also working on right now, to understand how to align these LLMs, or even generative AI in general, with social and human values. The bad social actor is one thing, but there are also the good social actor problems. When it comes to asking generative AI or LLMs to do things that you need them to do, which are very legitimate requests, you want to make sure these LLMs do not make mistakes and that they actually follow your rules. These LLMs may hallucinate. They may come up with something that doesn't exist, or they may just give you a wrong answer in a very convincing, persuasive way. All these things are concerns we have right now. My research is addressing those issues.

How do you convince the world that your recommendations are right?

It's my job to make sure my research is seen by the research community. I'm doing a lot of work to make sure my voice is heard, for example, on Twitter. [she laughs] I have a Twitter account where I try to tell people what my research is about, but also, the research community, in general, is very good at keeping up with the literature. If you publish your results, your voice will be heard. I guess your question is more about, let's say, your research has been accepted by your peers in academia, but what about outside of