Computer Vision News - March 2022
Karen Tatarian

allows the robot to influence how close the human stands, how they address the robot, whether they take its suggestions or not, and how they greet and end the interaction by mirroring nonverbal behaviors of the robot [2].

Adaptation and Personalization for Human-Robot Interaction (HRI): Human users expect the agents, robots, and technologies they interact with to also adapt to them, as this in turn improves their usability. In HCI, an adaptive system does not necessarily have the agent learn new behaviors, but rather decide what to adapt, for instance by combining different behaviors, and when or when not to make these modifications. RL may provide a possible solution for achieving adaptation in HRI, as a way for the robot to evaluate its behavior. However, as I have mentioned, human behaviors and social signals are complex, dynamic, and continuous in nature. Trying to discretize them would lead to large state-spaces and loss of information. In addition, training RL models for HRI is costly and, as the COVID-19 pandemic has shown us, it is sometimes extremely challenging. To answer the latter, I used the rich datasets collected in previous work to build a simulation set-up and environment for HRI on which to train RL models for such use cases. Moreover, to address the adaptation problem using RL, I looked into the multi-modal social signals of the human to formulate the reward signal. It was then used to adapt the robot's multi-modal behavior, creating various possible combinations made of gaze, gestures, proxemics, and emotional expressions, with the goal of increasing the robot's social intelligence and influence. This reward function was designed to reflect the complexity and dynamics that take place during HRI. The results allowed us to further investigate which combinations of modalities making up the robot's behavior the agents would choose. These findings are crucial to unlock the advancement of the social intelligence of future technologies in adapting to humans, learning from them, and communicating with them beyond just verbal means.

Figure 2: Sample timeline including speech, gaze mechanisms (turn-taking, floor-holding, turn-yielding), and social gestures (deictic gestures: "You" vs "Me" when mentioned in speech; beat gesture: emphasizing the two choices the user needs to select from)
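The idea of treating the robot's multi-modal behavior as a discrete set of modality combinations, evaluated through a reward derived from the human's social signals, can be sketched as a small RL loop. The sketch below is purely illustrative: the article does not specify the algorithm or reward weights, so the signal names (`mutual_gaze`, `proximity`, `engagement`), the weights, and the stateless Q-learning update are all assumptions made for this example.

```python
import itertools
import random

MODALITIES = ["gaze", "gesture", "proxemics", "emotion"]
# Each action activates a subset of modalities (2^4 = 16 combinations).
ACTIONS = [combo for r in range(len(MODALITIES) + 1)
           for combo in itertools.combinations(MODALITIES, r)]

def social_reward(signals):
    """Hypothetical reward: weighted sum of observed human social signals,
    each assumed to be normalized to [0, 1]. The weights are illustrative."""
    weights = {"mutual_gaze": 0.4, "proximity": 0.3, "engagement": 0.3}
    return sum(weights[k] * signals.get(k, 0.0) for k in weights)

def q_learning_step(q, action_idx, reward, alpha=0.1):
    """Stateless (bandit-style) Q update after one interaction episode."""
    q[action_idx] += alpha * (reward - q[action_idx])

def select_action(q, epsilon=0.2, rng=random):
    """Epsilon-greedy choice over the modality combinations."""
    if rng.random() < epsilon:
        return rng.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda i: q[i])
```

In a real HRI set-up the reward would come from perception modules estimating the human's signals during the interaction; here a simulated environment can stand in, which is exactly the role the simulation set-up described above plays during training.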