and, consequently, better decision-making when producing an action.

This work not only addresses a significant challenge in robotic learning but also opens up new avenues for research. "We show there is a lot of work that needs to be done on combining the learning of multiple modalities, especially in the realm of vision and touch," Fotios points out. "Particularly in robotics, they kind of help each other. For example, vision gives you more global information about the world, but when you have something in your hand, or you're touching something, most of the information comes from the tactile sense."

The team is currently exploring the temporal aspect of these representations – how they change over time and how each modality's importance shifts depending on the task. For instance, when a human grasps an object, vision is crucial initially, but tactile feedback becomes more important once the object is in hand. Understanding and modeling this temporal shift in importance could lead to even more sophisticated robotic systems.

Winning the Best Student Paper Award at UR2024 is no small feat. What does Fotios believe sets his work apart from the competition? "I think there are two parts to it," he tells us. "First was the paper itself: the idea, how we implemented it, how important the problem we were solving was, and how novel our way of solving it was. Also, how you communicate your work, which is hard for us researchers because we think everybody thinks the way we think, and then we don't explain why something is important. We did an
RkJQdWJsaXNoZXIy NTc3NzU=