🤖 AI Summary
This work addresses the data privacy and copyright risks inherent in autoregressive image generation models by introducing membership inference, previously unexplored in this domain, to detect whether a specific image was used during model training. Methodologically, we propose an implicit token-level classification scoring mechanism coupled with an adaptive aggregation strategy that up-weights low-score tokens, adapting insights from large language model membership detection to the visual autoregressive setting. Key findings reveal that membership inference follows a linear scaling law, so larger models exhibit heightened training data leakage, and that scale-wise visual autoregressive architectures are more susceptible to membership disclosure than other autoregressive paradigms. Experiments demonstrate that our approach significantly outperforms existing baselines on both class-conditional and text-to-image generation tasks, exhibiting strong robustness and cross-scenario generalization. The implementation is publicly available.
📝 Abstract
Autoregressive image generation has witnessed rapid advances, with prominent models such as scale-wise visual autoregression pushing the boundaries of visual synthesis. However, these developments also raise significant concerns regarding data privacy and copyright. In response, training data detection has emerged as a critical task for identifying unauthorized data usage in model training. To better understand the vulnerability of autoregressive image generative models to such detection, we conduct the first study applying membership inference to this domain. Our approach comprises two key components: implicit classification and an adaptive score aggregation strategy. First, we compute implicit token-wise classification scores within the query image. Then, we propose an adaptive score aggregation strategy that places greater emphasis on tokens with lower scores to obtain a final score. A higher final score indicates that the sample is more likely to have been part of the training set. To validate the effectiveness of our method, we adapt existing detection algorithms originally designed for LLMs to visual autoregressive models. Extensive experiments demonstrate the superiority of our method in both class-conditional and text-to-image scenarios. Moreover, our approach exhibits strong robustness and generalization under various data transformations. Furthermore, our experiments suggest two novel findings: (1) a linear scaling law for membership inference, exposing the vulnerability of large foundation models, and (2) training data from scale-wise visual autoregressive models is easier to detect than from other autoregressive paradigms. Our code is available at https://github.com/Chrisqcwx/ImageAR-MIA.
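The two-stage pipeline described in the abstract (per-token implicit classification scores, then an aggregation that emphasizes the lowest-scoring tokens) can be sketched as follows. The softmin-style exponential weighting here is an illustrative assumption in the spirit of Min-K%-style LLM detectors, not the paper's exact formulation:

```python
import math

def membership_score(token_scores, temperature=1.0):
    """Aggregate per-token implicit classification scores into one
    membership score.

    Low-scoring tokens receive exponentially larger weights
    (hypothetical softmin-style weighting), so a few poorly
    predicted tokens pull the aggregate down sharply. A higher
    final score suggests the image was in the training set.
    """
    # Weight each token inversely to its score.
    weights = [math.exp(-s / temperature) for s in token_scores]
    total = sum(weights)
    # Weighted average dominated by the lowest-scoring tokens.
    return sum(w * s for w, s in zip(weights, token_scores)) / total

# A likely member: the model scores all tokens uniformly high.
member = membership_score([2.1, 1.9, 2.0, 2.2])
# A likely non-member: a few low-scoring tokens dominate the aggregate.
non_member = membership_score([2.1, 0.2, 2.0, 0.3])
```

With this weighting, `member > non_member`, matching the abstract's decision rule that higher final scores indicate training-set membership.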