🤖 AI Summary
Current text-to-image models suffer from significant limitations in content controllability, safety, and scalability—particularly in robustly suppressing unsafe concepts (e.g., nudity) or performing zero-shot style injection. To address this, we propose the first training-free, interpretable concept modulation framework. It leverages k-sparse autoencoders (k-SAEs) to disentangle and localize monosemantic concepts within the text-embedding latent space, enabling bidirectional steering—i.e., suppression or excitation—via Concept Activation Intervention (CAI). Our method requires no fine-tuning, LoRA adaptation, or architectural modification, and supports zero-shot style transfer and adversarially robust intervention. Experiments demonstrate a 20.01% improvement in unsafe-content removal rate, preserved generation quality, inference roughly five times faster than state-of-the-art alternatives, and stable behavior across multi-style transfer and adversarial prompting scenarios.
📝 Abstract
Despite the remarkable progress in text-to-image generative models, they are prone to adversarial attacks and inadvertently generate unsafe, unethical content. Existing approaches often rely on fine-tuning models to remove specific concepts, which is computationally expensive, lacks scalability, and/or compromises generation quality. In this work, we propose a novel framework leveraging k-sparse autoencoders (k-SAEs) to enable efficient and interpretable concept manipulation in diffusion models. Specifically, we first identify interpretable monosemantic concepts in the latent space of text embeddings and leverage them to precisely steer the generation away from or towards a given concept (e.g., nudity), or to introduce a new concept (e.g., photographic style). Through extensive experiments, we demonstrate that our approach is very simple, requires no retraining of the base model nor LoRA adapters, does not compromise the generation quality, and is robust to adversarial prompt manipulations. Our method yields an improvement of $\mathbf{20.01\%}$ in unsafe concept removal, is effective in style manipulation, and is $\mathbf{\sim}5$x faster than the current state-of-the-art.
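To make the steering idea concrete, here is a minimal, self-contained sketch of k-SAE concept intervention in a text-embedding space. This uses random toy weights rather than a trained autoencoder, and every name (`ksae_encode`, `intervene`, the dimensions, the concept index) is a hypothetical stand-in, not the paper's implementation: the embedding is encoded into k-sparse latents, one concept's activation is rescaled (0 suppresses, >1 excites), and the result is decoded back.

```python
import numpy as np

rng = np.random.default_rng(0)
d_embed, d_latent, k = 16, 64, 8  # toy sizes, chosen for illustration

# Toy k-SAE parameters standing in for a trained sparse autoencoder.
W_enc = rng.standard_normal((d_latent, d_embed)) / np.sqrt(d_embed)
b_enc = np.zeros(d_latent)
W_dec = W_enc.T.copy()  # tied decoder weights, for simplicity
b_dec = np.zeros(d_embed)

def ksae_encode(x):
    """ReLU-encode, then keep only the top-k activations (k-sparsity)."""
    z = np.maximum(W_enc @ x + b_enc, 0.0)
    keep = np.argsort(z)[-k:]          # indices of the k largest activations
    z_sparse = np.zeros_like(z)
    z_sparse[keep] = z[keep]
    return z_sparse

def intervene(x, concept_idx, scale):
    """Rescale one concept latent (0.0 suppresses, >1.0 excites), then decode."""
    z = ksae_encode(x)
    z[concept_idx] *= scale
    return W_dec @ z + b_dec

x = rng.standard_normal(d_embed)            # stand-in for a text embedding
x_suppressed = intervene(x, concept_idx=3, scale=0.0)  # steer away from concept 3
x_boosted = intervene(x, concept_idx=3, scale=2.0)     # steer towards concept 3
```

The modified embedding would then be fed to the diffusion model's conditioning path in place of the original; because only the embedding is edited, no base-model weights change, which is what makes the approach training-free.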