Beyond the Black Box: Identifiable Interpretation and Control in Generative Models via Causal Minimality

📅 2025-12-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deep generative models, such as diffusion models and Transformers, often operate as opaque black boxes, lacking interpretability and controllability. Method: We propose a theoretical framework grounded in the principle of causal minimality to establish a principled foundation for generative-model interpretability. Our approach constructs a hierarchical selection model and rigorously proves that, under sparsity/compression constraints, latent variables are causally identifiable and recover the true generative mechanism. Integrating causal inference, sparse coding, and interpretability-aware training, we achieve the first causal identification and decomposition of latent representations in mainstream vision and language generative models. Results: The extracted concept graphs enable fine-grained semantic manipulation, and extensive experiments across multiple benchmarks validate both the causal identifiability of latent variables and their cross-task generalizability for controllable generation.

📝 Abstract
Deep generative models, while revolutionizing fields like image and text generation, largely operate as opaque black boxes, hindering human understanding, control, and alignment. While methods like sparse autoencoders (SAEs) show remarkable empirical success, they often lack theoretical guarantees, risking subjective insights. Our primary objective is to establish a principled foundation for interpretable generative models. We demonstrate that the principle of causal minimality -- favoring the simplest causal explanation -- can endow the latent representations of diffusion vision and autoregressive language models with clear causal interpretation and robust, component-wise identifiable control. We introduce a novel theoretical framework for hierarchical selection models, where higher-level concepts emerge from the constrained composition of lower-level variables, better capturing the complex dependencies in data generation. Under theoretically derived minimality conditions (manifesting as sparsity or compression constraints), we show that learned representations can be equivalent to the true latent variables of the data-generating process. Empirically, applying these constraints to leading generative models allows us to extract their innate hierarchical concept graphs, offering fresh insights into their internal knowledge organization. Furthermore, these causally grounded concepts serve as levers for fine-grained model steering, paving the way for transparent, reliable systems.
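The "minimality conditions manifesting as sparsity constraints" in the abstract can be made concrete with a classic toy instance of sparsity-constrained latent recovery. The sketch below is purely illustrative and is not the paper's algorithm: it recovers a sparse latent code z from an observation x = D @ z by minimizing ||x - Dz||²/2 + λ‖z‖₁ with ISTA (iterative soft-thresholding). The dictionary D, the true code z_true, λ, and all sizes are assumptions chosen for the demo.

```python
import numpy as np

# Illustrative sketch only: L1 sparsity as a minimality constraint.
# We solve  min_z  0.5 * ||x - D z||^2 + lam * ||z||_1  via ISTA.
rng = np.random.default_rng(0)
n_obs, n_latent = 20, 30
D = rng.normal(size=(n_obs, n_latent)) / np.sqrt(n_obs)  # random dictionary

z_true = np.zeros(n_latent)
z_true[[4, 21]] = [1.5, -2.0]   # only two "concepts" are active
x = D @ z_true                   # noiseless observation

def ista(x, D, lam=0.01, n_iter=1000):
    """Iterative shrinkage-thresholding for L1-regularized least squares."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)             # gradient of the quadratic term
        z = z - step * grad
        # soft-thresholding = proximal operator of the L1 penalty
        z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return z

z_hat = ista(x, D)
print(np.flatnonzero(np.abs(z_hat) > 0.5))  # ideally the true active set {4, 21}
```

Under standard conditions on D (e.g. restricted isometry), the L1 solution recovers the true sparse support, which is the simplest flavor of the identifiability-from-sparsity phenomenon the abstract invokes at a much more general, causal level.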
Problem

Research questions and friction points this paper is trying to address.

Deep generative models operate as opaque black boxes, hindering human understanding, control, and alignment.
Empirical interpretability methods such as sparse autoencoders lack theoretical guarantees, risking subjective insights.
There is no principled way to extract a model's hierarchical concept structure for transparent, reliable steering.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Applying causal minimality for interpretable latent representations
Introducing hierarchical selection models for complex data dependencies
Enabling component-wise identifiable control via theoretical constraints
Authors

Lingjing Kong (Carnegie Mellon University)
Shaoan Xie (Carnegie Mellon University)
Guangyi Chen (Carnegie Mellon University, Pittsburgh, PA, USA; Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE)
Yuewen Sun (Mohamed bin Zayed University of Artificial Intelligence)
Xiangchen Song (Carnegie Mellon University)
Eric P. Xing (Carnegie Mellon University, Pittsburgh, PA, USA; Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE)
Kun Zhang (Carnegie Mellon University, Pittsburgh, PA, USA; Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE)