Seamless Human Motion Composition with Blended Positional Encodings

DAILY CVPR – Wednesday Poster Presentation

German Barquero (left) is a third-year PhD student working on human motion understanding and generation under the supervision of Sergio Escalera and Cristina Palmero (right). Cristina is a freelancer in computer vision and machine learning applied to human behavior understanding and synthesis. She is collaborating with the University of Barcelona on a project exploring an important problem in human motion generation. German and Cristina speak to us before their poster session this morning.

Traditional methods in human motion generation have focused mainly on isolated, short-duration motions, often guided by text, music, or scenes. While innovative, these techniques fall short when producing long, continuous sequences that transition seamlessly from one motion to the next. This paper takes a fully learning-based approach to generating motion according to multiple actions in a sequence. It proposes FlowMDM, the first diffusion-based model that generates seamless human motion compositions guided by textual descriptions without post-processing or extra denoising steps.

“We’re really proud of this paper,” German asserts enthusiastically. “It addresses a very important problem in human motion generation that we