Computer Vision News - August 2022
Marwa Mahmoud

Marwa, you're not Scottish and you're not English.

No, I'm originally from Egypt. I moved to Cambridge in 2010. I came here to do my PhD. Since then, I've been in the UK.

Can you tell us about your work in Glasgow and Cambridge?

My main research area is computer vision and machine learning for human behavior understanding and animal behavior understanding. I'm interested in multimodal signal processing, and I focus on applications in affective computing and social signal processing.

Do you want to give us one example?

I'm interested in applications using computer vision. For example, video analysis: looking at and analyzing human facial expressions, gesture analysis, all these nonverbal signals. In our daily interactions, we decode lots of these signals when we talk together, but machines are still not great at that. The idea is to build machines that have social intelligence, that can make interpretations from facial expressions, body motion, tone of voice, this kind of multimodal signal, and context. These are useful for applications such as robotics and virtual assistants, to interact in a natural way with humans. These machines are everywhere, right? They can be in a watch. They can be in Alexa or any of these voice assistants. In the future, there will be more of these systems embedded in our world. The idea is to build models that can help in understanding human behavior and make interpretations automatically, in the same way that we humans might make these interpretations. I mentioned animals as well because I've also recently been applying and devising computer vision techniques for animals, automatically analyzing their movements and facial expressions. This can tell us a lot about their emotions, but it is also useful for early diagnosis of diseases that can be painful, and in general for animal welfare. It's all about using computer vision in these applications.

Shouldn't we be afraid of this? You're saying that robots will learn how to understand poses and expressions like humans. Your robot will make more judgments than a regular human!

Not really. Basically, there is some kind of a gold standard. What are we comparing with? On what basis? How do we know whether we can understand the person in front of us properly or not? The robot…
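The pipeline Mahmoud describes, reading facial expressions and other nonverbal signals from video, typically begins with extracting facial landmarks frame by frame before any classifier sees the data. Below is a minimal sketch of that first step; the choice of MediaPipe and OpenCV is our illustration, not tooling named in the interview.

```python
# Minimal sketch: extract per-frame facial landmarks from a video,
# the typical first step before classifying expressions.
# MediaPipe and OpenCV are illustrative assumptions; the interview
# does not name specific tools.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)

def landmark_sequence(video_path):
    """Yield a list of (x, y, z) landmark tuples for each frame with a face."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            landmarks = results.multi_face_landmarks[0].landmark
            yield [(p.x, p.y, p.z) for p in landmarks]
    cap.release()
```

Downstream, such landmark trajectories would be combined with body-pose and audio features, the multimodal fusion she refers to, before any interpretation is made.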
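Her closing point about a gold standard is usually made concrete by comparing a model's automatic interpretations against labels produced by human annotators, using an agreement statistic such as Cohen's kappa. A hedged sketch of that comparison follows; the labels are toy placeholders, not data from any study.

```python
# Sketch: evaluating automatic interpretations against a human
# "gold standard". The labels below are toy placeholders for
# illustration only, not real study data.
from sklearn.metrics import cohen_kappa_score

human_labels = ["happy", "neutral", "sad", "happy", "neutral"]
model_labels = ["happy", "neutral", "neutral", "happy", "sad"]

# Cohen's kappa measures agreement beyond chance:
# 1.0 is perfect agreement, 0.0 is no better than chance.
kappa = cohen_kappa_score(human_labels, model_labels)
print(f"Model-vs-human agreement (kappa): {kappa:.2f}")
```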