using a grid structure where we smoothed out short deviations of individual patches and learned longer, more meaningful spatio-temporal motion trajectories. Finally, we modeled temporal motion transitions using Bayesian probability, inferring the next trajectory step given the current motion information (a toy sketch of this transition idea appears at the end of this section). The model was tested in both indoor and outdoor public spaces for short-term as well as long-term path prediction. The presented approach also proved suitable for abnormality detection in private homes, supporting independent living for the senior population.

Automatic human behavior understanding for Affective Computing

Human behaviors can be verbal, where words expressed by the voice are the communication channel, and nonverbal, where body language is the main communication channel. Psychological studies have shown that the nonverbal channels are far less consciously controlled than the verbal channel, and nonverbal signals can therefore be more valuable in revealing the true internal state and the true personality. Hence, during my PhD, together with my promoters and colleagues, I focused on nonverbal behavioral cues such as body posture and expressive movements and their relation to personality attributes.

In this context, we presented a novel CNN-based framework for personality recognition (Figure 2). Our model analyzed the scene at multiple levels of granularity: first, we encoded spatio-temporal descriptors for each individual in the scene; second, we extracted spatio-temporal descriptors from social groups; and third, we encoded the global proxemics (interpersonal distances) between all individuals in the scene. Experimental results demonstrated that jointly modeling this Person-Context information significantly improves on individual features alone for personality recognition. Finally, the analysis of Person-Context interactions provides useful information for several real-world applications, such as social role detection (e.g., leadership) and social event understanding and prediction.

Figure 2. Individual motion descriptors as well as two context descriptors are learned in a novel CNN framework for personality recognition. Individual motion descriptors (red) indicate the engagement level of every person in the scene, social group descriptors (green) indicate the engagement level of individuals in conversational groups, and finally, the context proxemics descriptors (purple) indicate the global attitude of each individual with respect to the others.
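To make the multi-granularity idea more concrete, the following minimal sketch shows how individual, social-group, and proxemics branches could be fused before a prediction head. This is only an illustration of the Person-Context fusion concept: the layer sizes, input shapes, and class name (PersonContextNet) are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class PersonContextNet(nn.Module):
    """Illustrative three-branch network: individual motion, social-group
    motion, and global proxemics features are fused to predict personality
    traits. All shapes and sizes are assumptions, not the published model."""

    def __init__(self, n_traits=5):
        super().__init__()
        # Branch 1: spatio-temporal descriptor of one individual
        # (e.g. a stack of T motion maps treated as input channels).
        self.person_branch = nn.Sequential(
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Branch 2: spatio-temporal descriptor of the person's social group.
        self.group_branch = nn.Sequential(
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Branch 3: global proxemics, e.g. a map of interpersonal distances.
        self.proxemics_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Fusion head: concatenated Person-Context features -> trait scores.
        self.head = nn.Sequential(
            nn.Linear(32 + 32 + 16, 64), nn.ReLU(),
            nn.Linear(64, n_traits))

    def forward(self, person_maps, group_maps, proxemics_map):
        feats = torch.cat([self.person_branch(person_maps),
                           self.group_branch(group_maps),
                           self.proxemics_branch(proxemics_map)], dim=1)
        return self.head(feats)

# Toy forward pass with random tensors of the assumed shapes.
net = PersonContextNet()
scores = net(torch.randn(2, 16, 64, 64),
             torch.randn(2, 16, 64, 64),
             torch.randn(2, 1, 64, 64))
print(scores.shape)  # torch.Size([2, 5])
```

The point of the sketch is simply that each level of granularity gets its own feature extractor, and the prediction is made from their concatenation rather than from individual features alone.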
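Returning to the trajectory-prediction work described at the start of this section, a very rough way to picture the Bayesian transition modeling is a first-order transition model over grid cells: count observed cell-to-cell transitions, smooth them, and predict the most probable next cell given the current one. The real model is considerably richer; every name and parameter below is hypothetical.

```python
import numpy as np

# Hypothetical sketch: first-order transition model over grid cells.
# Trajectories are sequences of (row, col) grid-cell indices obtained by
# quantizing image coordinates into an H x W grid of patches.

H, W = 8, 8                      # grid resolution (assumed)
n_cells = H * W

def cell_id(row, col):
    """Flatten a (row, col) grid position into a single cell index."""
    return row * W + col

def fit_transition_matrix(trajectories, smoothing=1.0):
    """Estimate P(next_cell | current_cell) from observed trajectories.
    Laplace smoothing here loosely stands in for smoothing out short
    deviations of individual patches."""
    counts = np.full((n_cells, n_cells), smoothing)
    for traj in trajectories:
        for (r0, c0), (r1, c1) in zip(traj[:-1], traj[1:]):
            counts[cell_id(r0, c0), cell_id(r1, c1)] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def predict_next(P, row, col):
    """Return the most probable next grid cell given the current one."""
    nxt = int(np.argmax(P[cell_id(row, col)]))
    return divmod(nxt, W)

# Toy usage: two people walking right along row 3, one with a short deviation.
trajs = [[(3, c) for c in range(6)],
         [(3, 0), (3, 1), (4, 2), (3, 3), (3, 4)]]
P = fit_transition_matrix(trajs)
print(predict_next(P, 3, 2))     # most likely continues along row 3
```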
RkJQdWJsaXNoZXIy NTc3NzU=