Computer Vision News - July 2024

CVPR Best Paper Award Winner
Generative Image Dynamics

Zhengqi Li is a research scientist at Google DeepMind, working on computer vision, computer graphics, and AI. His paper on Generative Image Dynamics has not only been selected as a highlight paper this year but is also in the running for a Best Paper Award. He is here to tell us more about it before his oral and poster presentations.

NOTE: this article was written before the announcement of the award winners, which explains why it keeps mentioning a candidate rather than a winning paper. Once again, we placed our bets on the right horse! Congratulations to Zhengqi and his team for the brilliant win! And to the other winning paper too!

Imagine looking at a picture of a beautiful rose and visualizing how it sways in the wind or responds to your touch. This innovative work aims to do just that by automatically animating single images without user annotations. It proposes to solve the problem by modeling what it calls image-space motion priors, which are used to generate a video in a highly efficient and consistent manner.

"By using these representations, we're able to simulate the dynamics of the underlying thing, like flowers, trees, clothing, or candles moving in the wind," Zhengqi tells us. "Then, we can do real-time interactive simulation. You can use your mouse to drag the flower, and it will respond automatically based on the physics of our world."

The applications of this technology are already promising. Currently, it can model small motions, similar to a cinemagraph, where the background is typically static but the object is moving. A potential application for this would …
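To make the idea of an image-space motion prior a bit more concrete, here is a minimal toy sketch, not the authors' implementation. It assumes that each pixel's trajectory over time can be summarized by a handful of Fourier coefficients (the paper models motion in the frequency domain), and that frames can be rendered by warping the single input image with the resulting displacement fields. The array shapes, the chosen frequencies, and the use of NumPy and OpenCV are all illustrative assumptions.

```python
# Toy sketch (not the authors' code) of an image-space motion prior:
# per-pixel motion is stored as a few Fourier coefficients, turned into
# time-varying displacement fields, and used to warp one input image.
import numpy as np
import cv2  # assumption: OpenCV is available for the warping step


def displacement_at_time(coeffs, freqs, t):
    """Convert per-pixel Fourier coefficients into a displacement field at time t.

    coeffs: complex array of shape (K, H, W, 2) - K frequency terms, x/y motion.
    freqs:  array of shape (K,) - temporal frequency of each term (Hz).
    Returns a real-valued (H, W, 2) displacement field in pixels.
    """
    phases = np.exp(2j * np.pi * freqs * t)           # (K,) complex phase per term
    field = np.einsum("k,khwc->hwc", phases, coeffs)  # sum over the K frequency terms
    return field.real


def warp_image(image, displacement):
    """Render one frame by resampling the source image (simple backward warp)."""
    h, w = image.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x - displacement[..., 0]).astype(np.float32)
    map_y = (grid_y - displacement[..., 1]).astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)


if __name__ == "__main__":
    # Stand-in photo and random coefficients, purely to exercise the code.
    image = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)
    K = 4                                              # number of frequency terms
    rng = np.random.default_rng(0)
    coeffs = (rng.normal(size=(K, 240, 320, 2))
              + 1j * rng.normal(size=(K, 240, 320, 2))) * 0.5   # small amplitudes
    freqs = np.array([0.5, 1.0, 1.5, 2.0])             # Hz, illustrative values
    frames = [warp_image(image, displacement_at_time(coeffs, freqs, t / 30.0))
              for t in range(60)]                      # two seconds at 30 fps
```

Roughly speaking, the interactive "drag the flower" behavior Zhengqi describes would then amount to projecting the user's mouse input onto these oscillatory motion terms and re-synthesizing the displacement field in real time, in the spirit of modal analysis from physics-based simulation.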
