VCE: Safe Autoregressive Image Generation via Visual Contrast Exploitation

📅 2025-09-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing autoregressive image generation models lack effective NSFW content mitigation: most prior concept-erasure work targets diffusion-based models. To close this gap, the paper proposes Visual Contrast Exploitation (VCE), a framework that constructs fine-grained contrastive image pairs to decouple unsafe concepts from their associated content semantics, and applies Direct Preference Optimization (DPO) so the model learns to identify and exploit the visual contrastive features between each pair. VCE enables controllable, token-level erasure of unsafe concepts (artistic styles, explicit content, and objects) in autoregressive generation without architectural modifications or degradation in generation quality, suppressing unsafe content while preserving the semantic fidelity of unrelated safe concepts. Experiments across artist-style erasure, explicit-content erasure, and object removal demonstrate state-of-the-art performance, establishing the first dedicated safety-alignment solution for autoregressive image generation and filling a critical technical gap in this domain.

📝 Abstract
Recently, autoregressive image generation models have wowed audiences with their remarkable capability in creating surprisingly realistic images. Models such as GPT-4o and LlamaGen can not only produce images that faithfully mimic renowned artistic styles like Ghibli, Van Gogh, or Picasso, but also potentially generate Not-Safe-For-Work (NSFW) content, raising significant concerns regarding copyright infringement and ethical use. Despite these concerns, methods to safeguard autoregressive text-to-image models remain underexplored. Previous concept erasure methods, primarily designed for diffusion models that operate in denoising latent space, are not directly applicable to autoregressive models that generate images token by token. To address this critical gap, we propose Visual Contrast Exploitation (VCE), a novel framework comprising: (1) an innovative contrastive image pair construction paradigm that precisely decouples unsafe concepts from their associated content semantics, and (2) a sophisticated DPO-based training approach that enhances the model's ability to identify and leverage visual contrastive features from image pairs, enabling precise concept erasure. Our comprehensive experiments across three challenging tasks (artist style erasure, explicit content erasure, and object removal) demonstrate that our method effectively secures the model, achieving state-of-the-art results while erasing unsafe concepts and maintaining the integrity of unrelated safe concepts. The code and models are available at https://github.com/Maplebb/VCE.
Problem

Research questions and friction points this paper is trying to address.

Safeguarding autoregressive image models from unsafe content generation
Addressing concept erasure limitations in token-by-token image generation
Preventing NSFW content while maintaining unrelated concept integrity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrastive image pair construction for concept decoupling
DPO-based training to leverage visual contrastive features
Token-level autoregressive model safety via precise concept erasure
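The DPO-based training idea above can be sketched in a few lines. This is an illustrative reduction under stated assumptions, not the paper's implementation: it assumes each image is represented as a discrete token sequence whose total log-probability under the policy and a frozen reference model is available, and treats the concept-free image of a contrastive pair as the preferred sample and the concept-bearing image as the dispreferred one. The function name and arguments are hypothetical.

```python
import math

def dpo_pair_loss(policy_logp_safe, policy_logp_unsafe,
                  ref_logp_safe, ref_logp_unsafe, beta=0.1):
    """DPO loss for one contrastive image pair (illustrative sketch).

    Each argument is the summed log-probability a model assigns to an
    image's full token sequence; autoregressive generators factorize
    images into discrete tokens, so these quantities are well defined.
    The loss pushes the policy to prefer the safe image over its
    unsafe counterpart, relative to the frozen reference model.
    """
    margin = beta * ((policy_logp_safe - ref_logp_safe)
                     - (policy_logp_unsafe - ref_logp_unsafe))
    # -log(sigmoid(margin)); numerically stable for either sign
    if margin >= 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))

# Toy check: when the policy already prefers the safe sequence more
# strongly than the reference does, the loss drops below log(2),
# the value at zero margin.
baseline = dpo_pair_loss(-11.0, -11.0, -11.0, -11.0)
improved = dpo_pair_loss(-10.0, -12.0, -11.0, -11.0)
```

Because the preference signal comes from paired images that differ only in the target concept, the gradient concentrates on the tokens carrying that concept, which is what enables erasure without disturbing unrelated content.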