ICAS: Detecting Training Data from Autoregressive Image Generative Models

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses data privacy and copyright risks inherent in autoregressive image generation models by introducing membership inference, previously unexplored in this domain, to detect whether a specific image was used during model training. Methodologically, the authors propose an implicit token-level classification scoring mechanism coupled with an adaptive aggregation strategy that up-weights low-scoring tokens, adapting insights from large language model membership detection to the visual autoregressive framework. Key findings reveal a linear scaling law for membership inference, with larger models exhibiting greater training data leakage, and show that scale-wise visual autoregressive architectures are more susceptible to membership disclosure than other autoregressive paradigms. Experiments demonstrate that the approach significantly outperforms existing baselines on both class-conditional and text-to-image generation tasks, exhibiting strong robustness and cross-scenario generalization. The implementation is publicly available.

📝 Abstract
Autoregressive image generation has witnessed rapid advancements, with prominent models such as scale-wise visual auto-regression pushing the boundaries of visual synthesis. However, these developments also raise significant concerns regarding data privacy and copyright. In response, training data detection has emerged as a critical task for identifying unauthorized data usage in model training. To better understand the vulnerability of autoregressive image generative models to such detection, we conduct the first study applying membership inference to this domain. Our approach comprises two key components: implicit classification and an adaptive score aggregation strategy. First, we compute the implicit token-wise classification score within the query image. Then we propose an adaptive score aggregation strategy to acquire a final score, which places greater emphasis on the tokens with lower scores. A higher final score indicates that the sample is more likely to have been involved in the training set. To validate the effectiveness of our method, we adapt existing detection algorithms originally designed for LLMs to visual autoregressive models. Extensive experiments demonstrate the superiority of our method in both class-conditional and text-to-image scenarios. Moreover, our approach exhibits strong robustness and generalization under various data transformations. Furthermore, extensive experiments reveal two novel key findings: (1) A linear scaling law on membership inference, exposing the vulnerability of large foundation models. (2) Training data from scale-wise visual autoregressive models is easier to detect than that from other autoregressive paradigms. Our code is available at https://github.com/Chrisqcwx/ImageAR-MIA.
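The two-stage scoring the abstract describes (token-wise scores, then an aggregation that emphasizes low-scoring tokens) resembles Min-K%-style membership tests for LLMs. Below is a minimal sketch of that idea in Python, assuming per-token log-likelihood as a stand-in for the implicit classification score; the parameters `k_ratio` and `alpha` are hypothetical, and the actual ICAS scoring differs in detail:

```python
import numpy as np

def membership_score(token_probs, k_ratio=0.2, alpha=2.0):
    """Sketch of a two-stage membership score (not the exact ICAS method).

    token_probs: model-assigned probabilities of each image token.
    k_ratio:     fraction of lowest-scoring tokens to emphasize (hypothetical).
    alpha:       extra weight given to those low-scoring tokens (hypothetical).
    Returns a scalar; higher suggests the image is more likely a training member.
    """
    # Stage 1: token-wise score, here the log-likelihood of each token.
    scores = np.log(np.asarray(token_probs, dtype=float))

    # Stage 2: adaptive aggregation that up-weights the lowest-scoring tokens,
    # since non-members tend to have a few tokens the model predicts poorly.
    order = np.argsort(scores)                      # ascending: worst tokens first
    n_low = max(1, int(len(scores) * k_ratio))      # how many tokens to emphasize
    weights = np.ones_like(scores)
    weights[order[:n_low]] = alpha                  # heavier weight on low scores
    return float(np.average(scores, weights=weights))
```

A member-like image (uniformly well-predicted tokens) keeps a high aggregate, while a handful of poorly predicted tokens drags a non-member's score down more than a plain mean would, which is the intended effect of the low-score emphasis.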
Problem

Research questions and friction points this paper is trying to address.

Detect unauthorized training data in autoregressive image models
Assess vulnerability of models to membership inference attacks
Improve detection robustness across data transformations and scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Implicit token-wise classification score computation
Adaptive score aggregation strategy
Membership inference for autoregressive models
Hongyao Yu
Tsinghua University
machine learning, computer vision, AI security
Yixiang Qiu
Tsinghua Shenzhen International Graduate School
Trustworthy AI, Computer Vision, Deep Learning
Yiheng Yang
Harbin Institute of Technology, Shenzhen
Hao Fang
Tsinghua Shenzhen International Graduate School, Tsinghua University
Tianqu Zhuang
Tsinghua Shenzhen International Graduate School, Tsinghua University
Jiaxin Hong
Harbin Institute of Technology, Shenzhen
Bin Chen
Harbin Institute of Technology, Shenzhen
Hao Wu
Tsinghua Shenzhen International Graduate School, Tsinghua University; Shenzhen ShenNong Information Technology Co., Ltd.
Shu-Tao Xia
SIGS, Tsinghua University
coding and information theory, machine learning, computer vision, AI security