🤖 AI Summary
Deep generative models—such as diffusion models and Transformers—are often opaque black boxes, lacking interpretability and controllability.
Method: We propose the first theoretical framework grounded in the principle of causal minimality for interpretable generative models. The approach constructs a hierarchical selection model and rigorously proves that, under sparsity or compression constraints, the learned latent variables are causally identifiable and recover the true generative mechanism. By integrating causal inference, sparse coding, and interpretability-aware training, this yields a causal identification and decomposition of latent representations in mainstream vision and language generative models.
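To make the sparsity/compression constraint concrete, here is a minimal sketch of our own (not the paper's code) of a top-k sparse bottleneck fit to a frozen generative model's hidden activations, in the spirit of sparse-autoencoder-style training; the class name SparseConceptCoder, the dimensions, and the value of k are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseConceptCoder(nn.Module):
    """Hypothetical top-k sparse bottleneck over frozen model activations.

    Illustrates the sparsity/compression constraint: each activation is
    explained by only a few latent concepts, in the spirit of causal
    minimality (prefer the sparsest explanation that still reconstructs).
    """
    def __init__(self, d_model: int = 768, n_concepts: int = 16384, k: int = 32):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_concepts)
        self.decoder = nn.Linear(n_concepts, d_model, bias=False)
        self.k = k

    def forward(self, h: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # Encode activations into a wide concept space, then keep only the
        # k largest coefficients (hard sparsity / compression constraint).
        z = F.relu(self.encoder(h))
        topk = torch.topk(z, self.k, dim=-1)
        z_sparse = torch.zeros_like(z).scatter_(-1, topk.indices, topk.values)
        h_hat = self.decoder(z_sparse)
        return h_hat, z_sparse

# Training objective: reconstruct the frozen model's activations while the
# top-k constraint enforces minimality of the latent explanation.
coder = SparseConceptCoder()
h = torch.randn(8, 768)            # placeholder activations from a frozen model
h_hat, z = coder(h)
loss = F.mse_loss(h_hat, h)
```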
Results: The extracted concept graphs enable fine-grained semantic manipulation. Extensive experiments across multiple benchmarks validate both the causal identifiability of latent variables and their cross-task generalizability for controllable generation.
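As a rough illustration of what fine-grained semantic manipulation with an extracted concept could look like, below is a hedged activation-steering sketch: the concept direction, the chosen layer, and the steering strength alpha are hypothetical placeholders, not quantities or procedures from the paper.

```python
import torch

def make_concept_steering_hook(concept_direction: torch.Tensor, alpha: float = 4.0):
    """Return a forward hook that nudges hidden states along a concept direction.

    `concept_direction` could be one decoder column of the sparse coder sketched
    above (i.e., one extracted concept); `alpha` controls the edit strength.
    Both are illustrative assumptions.
    """
    direction = concept_direction / concept_direction.norm()

    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + alpha * direction.to(hidden.dtype).to(hidden.device)
        return (steered, *output[1:]) if isinstance(output, tuple) else steered

    return hook

# Usage sketch: attach the hook to an intermediate block of a generative model,
# run generation, then remove it.
# handle = model.blocks[10].register_forward_hook(
#     make_concept_steering_hook(concept_direction))
# ... generate ...
# handle.remove()
```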
📝 Abstract
Deep generative models, while revolutionizing fields like image and text generation, largely operate as opaque black boxes, hindering human understanding, control, and alignment. Methods such as sparse autoencoders (SAEs) show remarkable empirical success, but they often lack theoretical guarantees, so the insights they yield risk being subjective. Our primary objective is to establish a principled foundation for interpretable generative models. We demonstrate that the principle of causal minimality -- favoring the simplest causal explanation -- can endow the latent representations of diffusion-based vision models and autoregressive language models with clear causal interpretation and robust, component-wise identifiable control. We introduce a novel theoretical framework for hierarchical selection models, in which higher-level concepts emerge from the constrained composition of lower-level variables, better capturing the complex dependencies in data generation. Under theoretically derived minimality conditions (manifesting as sparsity or compression constraints), we show that learned representations can be equivalent to the true latent variables of the data-generating process. Empirically, applying these constraints to leading generative models allows us to extract their innate hierarchical concept graphs, offering fresh insights into their internal knowledge organization. Furthermore, these causally grounded concepts serve as levers for fine-grained model steering, paving the way for transparent, reliable systems.
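For intuition about the hierarchical selection model described above, a toy data-generating process might look like the following, where each higher-level concept is composed from only a few lower-level variables; the variable names, dimensions, fan-in, and composition rule are all assumptions made purely for illustration, not the paper's formal model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hierarchical selection process: each higher-level concept selects
# (is composed from) only a few lower-level variables -- the sparse
# structure that the minimality conditions are meant to recover.
n_low, n_high, fanin = 12, 4, 3

z_low = rng.normal(size=n_low)                      # low-level latent variables
selection = np.zeros((n_high, n_low))
for i in range(n_high):
    parents = rng.choice(n_low, size=fanin, replace=False)
    selection[i, parents] = rng.normal(size=fanin)  # sparse composition weights

z_high = np.tanh(selection @ z_low)                 # higher-level concepts
mixing = rng.normal(size=(32, n_high))
x = mixing @ z_high + 0.01 * rng.normal(size=32)    # observed sample

# Identifiability, informally: among all models that reproduce x, prefer the
# one with the sparsest `selection` matrix -- causal minimality is what picks
# out the true concept graph rather than an arbitrary reparameterization.
```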