Safety Without Semantic Disruptions: Editing-free Safe Image Generation via Context-preserving Dual Latent Reconstruction

📅 2024-11-21
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Multimodal generative models trained on large, uncurated datasets often produce harmful images, and existing model editing techniques frequently distort the latent manifold, causing semantic degradation. To address this, we propose an **editing-free safe image generation paradigm**: leveraging safety-aligned text embeddings to guide diffusion sampling via **dual latent-space reconstruction**โ€”jointly optimizing noise prediction and context-aware reconstruction lossโ€”while employing a tunable-weight latent fusion mechanism that preserves semantic fidelity without modifying model parameters. Our method achieves state-of-the-art performance on established safe image generation benchmarks; enables fine-grained control over safety strength; and, for the first time, explicitly models and reveals the inherent trade-off between safety enhancement and over-censorship.

๐Ÿ“ Abstract
Training multimodal generative models on large, uncurated datasets can expose users to harmful, unsafe, controversial, or culturally inappropriate outputs. While model editing has been proposed to remove or filter undesirable concepts in embedding and latent spaces, it can inadvertently damage learned manifolds, distorting concepts in close semantic proximity. We identify limitations in current model editing techniques, showing that even benign, proximal concepts may become misaligned. To address the need for safe content generation, we leverage safe embeddings and a modified diffusion process with a tunable weighted summation in the latent space to generate safer images. Our method preserves global context without compromising the structural integrity of the learned manifolds. We achieve state-of-the-art results on safe image generation benchmarks and offer intuitive control over the level of model safety. We identify trade-offs between safety and censorship, which offers a necessary perspective for the development of ethical AI models. We will release our code.

Keywords: Text-to-Image Models, Generative AI, Safety, Reliability, Model Editing
Problem

Research questions and friction points this paper is trying to address.

Generates safe images without semantic disruptions
Preserves global context and structural integrity
Balances safety and censorship in ethical AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Safe embeddings for image generation
Modified diffusion process with tunable weights
Preserves global context and manifold integrity
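The abstract describes the core mechanism as a tunable weighted summation in the latent space, blending the original prompt's latent with a safety-guided one. The paper's exact update rule is not given on this page; the sketch below is only a rough illustration, assuming the fusion is a convex combination controlled by a single safety weight `w` (the function name `fuse_latents` and the toy arrays are hypothetical, not from the paper).

```python
import numpy as np

def fuse_latents(z_original: np.ndarray, z_safe: np.ndarray, w: float) -> np.ndarray:
    """Blend two diffusion latents with a tunable safety weight.

    Assumed convex combination: w = 0 keeps the original latent
    (no safety intervention), w = 1 fully adopts the safety-guided
    latent; intermediate values trade semantic fidelity against
    safety strength.
    """
    if not 0.0 <= w <= 1.0:
        raise ValueError("safety weight w must lie in [0, 1]")
    return (1.0 - w) * z_original + w * z_safe

# Toy latents standing in for denoised samples at one sampling step.
z_orig = np.array([1.0, -2.0, 0.5])
z_safe = np.array([0.0, 1.0, 0.5])

print(fuse_latents(z_orig, z_safe, 0.0))  # unchanged: [ 1.  -2.   0.5]
print(fuse_latents(z_orig, z_safe, 0.5))  # halfway:   [ 0.5 -0.5  0.5]
```

A single scalar `w` would account for the "intuitive control over the level of model safety" the abstract claims: sweeping it from 0 to 1 exposes the safety-versus-censorship trade-off directly, without touching model parameters.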
J. Vice
University of Western Australia
Naveed Akhtar
University of Melbourne
Richard Hartley
Australian National University, National ICT Australia (NICTA)
Ajmal Mian
University of Western Australia