🤖 AI Summary
Generative models rely on large-scale unlabeled data, raising copyright and safety concerns; existing concept erasure methods often overfit via fine-tuning, degrading model generalization. This paper proposes an end-to-end differentiable, minimalist erasure paradigm: it defines the erasure objective solely via the divergence between output distributions—bypassing parameter fine-tuning—and introduces a learnable neuron masking mechanism within the flow-matching framework to precisely suppress undesirable concepts. Crucially, the original model architecture remains unaltered, preserving its full generative capacity. Experiments demonstrate that the method significantly improves erasure robustness and regulatory compliance across multiple state-of-the-art generative models—including diffusion and flow-based architectures—while maintaining high-fidelity image synthesis.
📝 Abstract
Recent advances in generative models have demonstrated remarkable capabilities in producing high-quality images, but their reliance on large-scale unlabeled data has raised significant safety and copyright concerns. Efforts to address these issues by erasing unwanted concepts have shown promise. However, many existing erasure methods involve excessive modifications that compromise the overall utility of the model. In this work, we address these issues by formulating a novel minimalist concept erasure objective based *only* on the distributional distance of final generation outputs. Building on our formulation, we derive a tractable loss for differentiable optimization that leverages backpropagation through all generation steps in an end-to-end manner. We also conduct extensive analysis showing theoretical connections to related models and methods. To improve the robustness of the erasure, we incorporate neuron masking as an alternative to model fine-tuning. Empirical evaluations on state-of-the-art flow-matching models demonstrate that our method robustly erases concepts without degrading overall model performance, paving the way for safer and more responsible generative models.
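To make the high-level recipe concrete, here is a deliberately minimal toy sketch of the three ingredients the abstract describes: frozen pretrained weights, a learnable soft mask over neurons, and an objective defined purely on the distance between the masked model's outputs and the frozen model's outputs. This is not the paper's implementation; a single linear map stands in for the whole flow-matching sampler, the losses are simple squared distances rather than the paper's divergence, and all names (`x_erase`, `x_retain`, etc.) are hypothetical.

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

w = [1.0, 2.0]            # frozen "neuron" weights (never updated)
s = [1.0, 1.0]            # learnable mask logits; sigmoid(s_i) gates neuron i

def frozen(x):            # original model: y = w . x
    return sum(wi * xi for wi, xi in zip(w, x))

def masked(x, s):         # masked model: y = (sigmoid(s) * w) . x
    return sum(sigmoid(si) * wi * xi for si, wi, xi in zip(s, w, x))

x_erase  = [0.0, 1.0]     # input exciting the "concept" neuron (index 1)
x_retain = [1.0, 0.0]     # input the model should still handle faithfully

# Loss = -(output distance on x_erase) + (output distance on x_retain):
# gradient descent pushes the masked output away from the frozen output
# on the erased concept while keeping it close everywhere else.
lr = 0.5
for _ in range(300):
    d_e = masked(x_erase, s) - frozen(x_erase)
    d_r = masked(x_retain, s) - frozen(x_retain)
    for i in range(len(s)):
        m = sigmoid(s[i])
        dm = m * (1.0 - m)                       # d sigmoid(s_i) / d s_i
        g = (-2.0 * d_e * w[i] * x_erase[i]      # maximize erase divergence
             + 2.0 * d_r * w[i] * x_retain[i]) * dm
        s[i] -= lr * g

mask = [sigmoid(si) for si in s]
# the concept neuron's gate closes while the other neuron stays active
```

Because only the mask logits `s` are optimized, the pretrained weights `w` are untouched, mirroring the paper's point that the original architecture and capacity are preserved; erasure can in principle be undone by resetting the mask.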