AI Summary
Multimodal generative models trained on large, uncurated datasets often produce harmful images, and existing model editing techniques frequently distort the latent manifold, causing semantic degradation. To address this, we propose an **editing-free safe image generation paradigm**: leveraging safety-aligned text embeddings to guide diffusion sampling via **dual latent-space reconstruction** (jointly optimizing a noise-prediction loss and a context-aware reconstruction loss), while employing a tunable-weight latent fusion mechanism that preserves semantic fidelity without modifying model parameters. Our method achieves state-of-the-art performance on established safe image generation benchmarks; enables fine-grained control over safety strength; and, for the first time, explicitly models and reveals the inherent trade-off between safety enhancement and over-censorship.
Abstract
Training multimodal generative models on large, uncurated datasets can expose users to harmful, unsafe, controversial, or culturally inappropriate outputs. While model editing has been proposed to remove or filter undesirable concepts in embedding and latent spaces, it can inadvertently damage learned manifolds, distorting concepts in close semantic proximity. We identify limitations in current model editing techniques, showing that even benign, proximal concepts may become misaligned. To address the need for safe content generation, we leverage safe embeddings and a modified diffusion process with a tunable weighted summation in the latent space to generate safer images. Our method preserves global context without compromising the structural integrity of the learned manifolds. We achieve state-of-the-art results on safe image generation benchmarks and offer intuitive control over the level of model safety. We identify trade-offs between safety and censorship, which presents a necessary perspective in the development of ethical AI models. We will release our code.

Keywords: Text-to-Image Models, Generative AI, Safety, Reliability, Model Editing
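As a rough illustration of the tunable weighted summation in the latent space described above, the sketch below shows a convex combination of an original latent and a safety-guided latent, with a single weight controlling safety strength. All names and the linear-fusion form are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def fuse_latents(z_orig: np.ndarray, z_safe: np.ndarray, w: float) -> np.ndarray:
    """Hypothetical tunable-weight latent fusion.

    w = 0.0 keeps the original latent unchanged;
    w = 1.0 fully adopts the safety-guided latent;
    intermediate values trade off fidelity against safety.
    """
    assert 0.0 <= w <= 1.0, "safety weight must lie in [0, 1]"
    return (1.0 - w) * z_orig + w * z_safe

# Toy latents standing in for diffusion latents at one sampling step.
z_orig = np.array([1.0, -2.0, 0.5])
z_safe = np.zeros(3)

print(fuse_latents(z_orig, z_safe, 0.0))  # identical to z_orig
print(fuse_latents(z_orig, z_safe, 0.5))  # halfway blend
```

In a real sampler this fusion would be applied at each denoising step, so that the weight offers the kind of intuitive, continuous control over safety strength the abstract describes.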