Computer Vision News - November 2021
Congrats, Doctor!

Natasha Jaques recently completed her PhD at MIT, where her thesis received the Outstanding PhD Dissertation Award from the Association for the Advancement of Affective Computing. She now holds a joint position as a Research Scientist at Google Brain and Postdoctoral Fellow at UC Berkeley. She is focused on building socially and emotionally intelligent AI agents that can flexibly learn from multi-agent and human-AI interaction. Her research aims to build the case that social learning can enhance RL agents' ability to acquire interesting behavior, generalize to new environments, and interact with people. Congrats, Doctor Natasha!

Learning from human-AI interaction: Ultimately, for AI to be as satisfying and useful to people as possible, we want to train agents that directly optimize for human preferences. However, manually training a device is cumbersome, and people show low adherence when asked to provide explicit feedback labels. Instead, passively sensing the user's emotional state and social cues could allow the agent to learn quickly and at scale, enabling human-in-the-loop training without extra human effort. I have explored this idea in several papers on learning from human sentiment and social cues in dialog [1,2], including with a novel Offline RL technique [3,4]. Experiments deploying these models to interact live with humans reveal that learning from implicit, affective signals is more effective than relying on manual feedback. My work also demonstrated that a recurrent art generation model could be significantly improved with fewer than 70 samples of people's facial expression reactions to its sketches [5] (see Figure 1). My goal is to demonstrate that learning from implicit social cues can enhance human-AI interaction, and guide AI systems to take actions aligned with human preferences.
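To make the idea of learning from implicit feedback concrete, here is a minimal sketch of how sensed sentiment might stand in for an explicit reward in a policy-gradient loop. It is not the method from the papers cited above: the candidate responses, the toy_sentiment scorer, the simulated_user function, and the SimplePolicy class are all hypothetical stand-ins assumed for illustration.

```python
# Minimal sketch: implicit human feedback (sensed sentiment) as the RL reward.
# All names below are hypothetical illustrations, not from the cited work.
import numpy as np

rng = np.random.default_rng(0)

CANDIDATE_RESPONSES = ["enthusiastic reply", "neutral reply", "dismissive reply"]

def toy_sentiment(user_reaction: str) -> float:
    """Stand-in for passive sensing: map the user's reaction to a scalar reward."""
    positive, negative = {"great", "thanks", "love"}, {"boring", "stop", "bad"}
    words = set(user_reaction.lower().split())
    return float(len(words & positive) - len(words & negative))

def simulated_user(response_idx: int) -> str:
    """Toy stand-in for a live user; reacts more positively to response 0."""
    reactions = ["great thanks", "ok", "boring stop"]
    return reactions[response_idx]

class SimplePolicy:
    """Softmax policy over candidate responses, trained with REINFORCE."""
    def __init__(self, n_actions: int, lr: float = 0.1):
        self.logits = np.zeros(n_actions)
        self.lr = lr

    def probs(self) -> np.ndarray:
        z = np.exp(self.logits - self.logits.max())
        return z / z.sum()

    def act(self) -> int:
        return int(rng.choice(len(self.logits), p=self.probs()))

    def update(self, action: int, reward: float) -> None:
        # Policy-gradient step: increase the probability of responses that
        # earned higher implicit (sensed) reward, decrease the others.
        grad = -self.probs()
        grad[action] += 1.0
        self.logits += self.lr * reward * grad

policy = SimplePolicy(len(CANDIDATE_RESPONSES))
for _ in range(200):
    a = policy.act()
    reaction = simulated_user(a)   # in practice: the user's next utterance or facial expression
    r = toy_sentiment(reaction)    # implicit reward, no explicit labeling required
    policy.update(a, r)

print("Learned preference over responses:", np.round(policy.probs(), 3))
```

The key point the sketch tries to show is that the human never provides an explicit label; the reward is derived entirely from signals the user emits anyway, which is what makes human-in-the-loop training feasible at scale.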