CVPR Daily - Thursday
Poster Presentation

Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models

Manuel Brack (right) is a PhD candidate at the German Research Centre for Artificial Intelligence (DFKI) and is part of the Artificial Intelligence and Machine Learning Lab at TU Darmstadt led by Kristian Kersting. Patrick Schramowski (left), Manuel’s supervisor, is a Senior Researcher in the lab. He finished his PhD in March and works on ethical AI and on moderating and aligning large-scale models. They speak to us ahead of their poster this afternoon.

Diffusion models, known for their powerful text-to-image generation capabilities, have recently faced scrutiny over biased and inappropriate behavior. In this paper, Manuel and Patrick explore the broader implications of training large-scale diffusion models on data from the web. While in some ways a reflection of society, the internet has many flaws, housing a substantial amount of inappropriate and unsightly material. Diffusion models tend to replicate this objectionable content, raising concerns about the generation of offensive imagery, including nudity, hate, and violence.

“The problem is that these models are biased and will implicitly generate this content,” Patrick tells us. “Given a prompt stating these concepts, which is maybe implicit, the model will generate it. For example, with a link to womanhood, we observed that the model generates nudity.”

Manuel adds: “The model picked up on some implicit correlations in the training data, which resulted in unexpected behavior at inference. Using terms like ‘Asian’ or ‘Japanese,’ you were likely to get explicit sexual content in over 80% of all images.”
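For readers who want to experiment with the method themselves, Safe Latent Diffusion is available as a community pipeline in the Hugging Face diffusers library. The snippet below is a minimal sketch of that usage; the checkpoint ID AIML-TUDA/stable-diffusion-safe, the StableDiffusionPipelineSafe class, and the SafetyConfig presets are assumptions based on the diffusers implementation rather than details taken from the interview.

```python
# Minimal sketch: generating an image with SLD safety guidance enabled,
# assuming the community StableDiffusionPipelineSafe pipeline in diffusers.
import torch
from diffusers import StableDiffusionPipelineSafe
from diffusers.pipelines.stable_diffusion_safe import SafetyConfig

# Assumed checkpoint ID for the SLD-enabled Stable Diffusion weights.
pipe = StableDiffusionPipelineSafe.from_pretrained(
    "AIML-TUDA/stable-diffusion-safe",
    torch_dtype=torch.float16,
).to("cuda")

# An innocuous prompt of the kind discussed above, where implicit dataset
# correlations can otherwise steer the model toward inappropriate content.
prompt = "portrait of a japanese woman"

# SafetyConfig bundles the SLD hyperparameters (guidance scale, warm-up
# steps, threshold, momentum) into presets of increasing strength:
# WEAK, MEDIUM, STRONG, and MAX.
image = pipe(prompt=prompt, **SafetyConfig.MEDIUM).images[0]
image.save("safe_output.png")
```

The presets trade fidelity to the prompt against suppression strength: a stronger configuration steers the denoising process further away from the learned "inappropriate" concept at each step.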