🤖 AI Summary
To address the semantic ambiguity and poor interpretability of discrete codebooks in vector-quantized generative models (VQGMs), particularly the unclear mapping between high-level concepts and discrete tokens, this paper proposes CORTEX, a framework with dual-path attribution at the sample level and the codebook level. The sample-level method scores token importance within individual images, while the codebook-level method searches the entire codebook for globally relevant tokens; together they capture both local saliency and global relevance, enabling fine-grained concept–token alignment. CORTEX further supports targeted image editing and shortcut-feature diagnosis. Compatible with mainstream VQGMs, including VQ-VAE and VQ-GAN, it consistently outperforms baselines across multiple pretrained models, identifies semantically coherent token clusters across samples, and enables concept-level image editing and generation-bias detection.
📝 Abstract
Vector-Quantized Generative Models (VQGMs) have emerged as powerful tools for image generation. However, the key component of VQGMs -- the codebook of discrete tokens -- is still not well understood, e.g., which tokens are critical for generating an image of a certain concept? This paper introduces Concept-Oriented Token Explanation (CORTEX), a novel approach for interpreting VQGMs by identifying concept-specific token combinations. Our framework employs two methods: (1) a sample-level explanation method that analyzes token importance scores in individual images, and (2) a codebook-level explanation method that explores the entire codebook to find globally relevant tokens. Experimental results demonstrate CORTEX's efficacy in providing clear explanations of token usage in the generative process, outperforming baselines across multiple pretrained VQGMs. Besides enhancing the transparency of VQGMs, CORTEX is useful in applications such as targeted image editing and shortcut feature detection. Our code is available at https://github.com/YangTianze009/CORTEX.