CVPR Daily - Thursday
when compared to the method proposed by the researchers.

Empirical evaluations indicate that Safe Latent Diffusion, which derives a distinct guidance direction and noise estimate for the concepts to be avoided, disentangling the safety guidance from the image generation, significantly outperforms negative prompting. Employing negative prompts, or modifying the text prompt to include the word ‘dressed’, yields suboptimal results, including excessive alterations to the generated image. The aim is to generate an image closely resembling the original with all inappropriate content removed; negative prompting often fails to achieve this, instead producing a completely different image.

Regarding the next steps for this work, Manuel tells us they already have a preprint out scaling up the evaluation. “We’re now looking into a more general approach, evaluating the plethora of text-to-image models,” he reveals. “First, assessing their inappropriate degeneration. Do they generate this content, and at what scale? Then also looking at their image mitigation strategies. Can we use these instructions to avoid generating this content? We’ve introduced a new benchmark in this paper, which can be used for evaluating inappropriate degeneration in diffusion models, and I think this will be valuable for the community.”

Patrick continues: “It’s actually already being used. For example, some papers from other labs erase these concepts from the model. They’re tuning the model to forget. If readers are working on anything like that, they can use our benchmarking dataset.”

To learn more about Manuel and Patrick’s work, visit Poster 183 this afternoon from 16:30-18:30 in the West Exhibit Hall.
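For readers who want to try the approach themselves, the sketch below contrasts negative prompting with SLD-style safety guidance and loads the benchmark prompts. It assumes the Safe Stable Diffusion integration in Hugging Face diffusers (identifiers such as StableDiffusionPipelineSafe and SafetyConfig.MEDIUM) and the I2P benchmark hosted on the Hub as AIML-TUDA/i2p; treat these names as assumptions and verify them against the current documentation.

```python
# A minimal sketch, assuming the Safe Stable Diffusion integration in
# Hugging Face diffusers. Class, preset, model, and dataset identifiers
# below are assumptions; check the current docs before relying on them.
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionPipelineSafe
from diffusers.pipelines.stable_diffusion_safe import SafetyConfig

device = "cuda" if torch.cuda.is_available() else "cpu"
prompt = "a portrait of a person at the beach"

# Negative prompting steers the *entire* generation away from a concept,
# which often alters the image far beyond the inappropriate content.
sd = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4"
).to(device)
baseline = sd(prompt, negative_prompt="nudity").images[0]

# SLD instead computes a separate noise estimate for the unsafe concept
# and applies it as its own guidance term, disentangled from the regular
# classifier-free guidance, so the output stays close to the original.
sld = StableDiffusionPipelineSafe.from_pretrained(
    "AIML-TUDA/stable-diffusion-safe"
).to(device)
safe = sld(prompt, **SafetyConfig.MEDIUM).images[0]  # WEAK/MEDIUM/STRONG/MAX

# The benchmark mentioned above (I2P, Inappropriate Image Prompts) is
# assumed to be hosted on the Hugging Face Hub as "AIML-TUDA/i2p".
from datasets import load_dataset
i2p = load_dataset("AIML-TUDA/i2p", split="train")
test_prompt = i2p[0]["prompt"]  # field name is an assumption
```

The SafetyConfig presets trade safety strength against fidelity to the original image: stronger settings remove inappropriate content more aggressively but may alter more of the scene.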