FixTalk: Taming Identity Leakage for High-Quality Talking Head Generation in Extreme Cases
Oral & Award Candidate

When Shuai began exploring talking head generation – the task of synthesizing realistic facial animations from audio or video input – he was struck by two persistent problems. “Existing methods have made considerable progress,” he says, “but the issues of identity leakage and rendering artifacts persist. Therefore, this paper primarily focuses on addressing those.” Shuai Tan is a PhD student at Shanghai Jiao Tong University, under the supervision of Ye Pan. His work, shortlisted for a Best Paper award, proposes a novel approach to generating realistic talking heads, addressing some longstanding visual flaws that affect existing methods. Shuai shares with us his thoughts behind the work and its unexpected origins.