DAILY WACV Sunday

Oral Presentation

My3DGen: A Scalable Personalized 3D Generative Model

Luchao Qi is a third-year PhD student at the University of North Carolina at Chapel Hill (UNC) under the supervision of Roni Sengupta. His paper on democratizing 3D generative AI for personalized human face modeling has been accepted as an oral. Although he cannot attend WACV in person, we speak to him following his group's poster yesterday and before their oral later today.

In this paper, Luchao explores innovative ways to make 3D facial generation more personalized, accessible, and scalable. His model, My3DGen, uses generative AI to reconstruct a complete 3D model of a person's face from a limited number of images. This has several applications, including virtual communication, augmented reality, and content creation. Imagine being on a Zoom call, facing the camera, while AI generates how your face would appear from different perspectives, as if captured by multiple cameras.

Prior work on 3D human face modeling uses global models that are not always effective for individual users. "You can find a lot of prior work on the 3D human face, but this pre-trained global model is not perfect for each person," Luchao tells us. "Sometimes you find artifacts. Sometimes, the pre-trained model can't generate my face, or I'll be concerned about data privacy. Am I going to upload my personal data to the server? I just want to store my personal data myself, on my own phone. That's the motivation for our work. We want to personalize a pre-trained 3D human face for personal use."

One of the biggest challenges in personalizing pre-trained generative models is their size. Large AI models require substantial storage space.
RkJQdWJsaXNoZXIy NTc3NzU=