CVPR Daily - Friday

be dynamic backgrounds for virtual meetings, providing a more engaging and visually appealing alternative to static or blurred backgrounds, but without excessive motion that could be distracting.

“Moving to model larger motion, like human motion or cats and dogs running away, is an interesting future research direction,” Zhengqi points out. “We’re working on that to see if we can use a better and more flexible motion representation to model those generic motions and get better video generation or simulation results.”

Most current and prior mainstream approaches to video modeling use a deep neural network or diffusion model to directly predict large volumes of pixels representing video frames, which is computationally expensive. In contrast, this work predicts the underlying motion, which lies on a lower-dimensional manifold, and uses a small number of bases to represent a very long motion trajectory.

“You can use a very small number of coefficients to represent very long videos,” Zhengqi explains. “This allows us to use this

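The idea that a handful of basis coefficients can capture a very long motion trajectory can be illustrated with a toy example. The sketch below is not the paper's actual formulation; it simply assumes a Fourier basis and hypothetical sizes (a 240-frame trajectory, 16 kept coefficients) to show how few numbers are needed to recover a smooth per-pixel displacement signal.

```python
import numpy as np

# Illustrative sketch only (hypothetical sizes, Fourier basis assumed):
# represent a long per-pixel motion trajectory with a few low-frequency
# coefficients instead of storing a displacement for every frame.

num_frames = 240   # length of the trajectory (assumed value)
num_coeffs = 16    # small number of basis coefficients kept (assumed value)

# Hypothetical smooth trajectory: x-displacement of one pixel over time,
# built from two low-frequency oscillations.
t = np.arange(num_frames)
trajectory = (0.8 * np.sin(2 * np.pi * 2 * t / num_frames)
              + 0.3 * np.sin(2 * np.pi * 5 * t / num_frames))

# Encode: keep only the first few complex Fourier coefficients.
coeffs = np.fft.rfft(trajectory)[:num_coeffs]

# Decode: zero-pad the spectrum and reconstruct all 240 frames.
spectrum = np.zeros(num_frames // 2 + 1, dtype=complex)
spectrum[:num_coeffs] = coeffs
reconstruction = np.fft.irfft(spectrum, n=num_frames)

error = np.max(np.abs(reconstruction - trajectory))
print(f"{num_coeffs} coefficients recover a {num_frames}-frame "
      f"trajectory (max error {error:.1e})")
```

Smooth, oscillatory natural motion of the kind discussed above is dominated by low frequencies, which is why truncating the representation this aggressively loses very little.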
RkJQdWJsaXNoZXIy NTc3NzU=