Linear Causal Representation Learning by Topological Ordering, Pruning, and Disentanglement

📅 2025-09-26
🤖 AI Summary
Existing linear causal representation learning methods rely on either single-node interventional data or strong distributional assumptions (such as independence between latent variables and noise), limiting their practical applicability. Method: This paper proposes a causal representation learning algorithm for linear structural causal models (SCMs) that operates under weak assumptions, requiring only environmental heterogeneity, i.e., variation in causal mechanisms or noise distributions across environments. It models the mapping from latent variables to observations as a linear mixing model and combines topological ordering, pruning, and disentanglement to recover the causal features up to an equivalence class. Contribution/Results: The method obviates the need for interventional data and substantially relaxes distributional constraints. Synthetic experiments demonstrate superior finite-sample performance over state-of-the-art baselines. Furthermore, the method is applied to a causal interpretability analysis of internal representations in large language models, empirically demonstrating the feasibility of causality-driven AI interpretability.
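The data-generating model described above can be illustrated with a minimal simulation. The sketch below (assumed notation, not the paper's code) draws latent features z from a linear SCM with a lower-triangular coefficient matrix B, mixes them into observations x through a linear matrix G, and simulates environmental heterogeneity by varying the exogenous noise scale across environments:

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, n = 3, 5, 1000          # latent dim, observed dim, samples per environment

# Lower-triangular B encodes a DAG over the latents in topological order 0..d-1.
B = np.tril(rng.normal(size=(d, d)), k=-1)
# Linear mixing matrix, shared across environments.
G = rng.normal(size=(p, d))

def sample_env(noise_scale):
    """Draw n observations x = G z + measurement noise, where z = B z + eps."""
    eps = rng.normal(scale=noise_scale, size=(n, d))
    z = eps @ np.linalg.inv(np.eye(d) - B).T   # solve z = B z + eps
    return z @ G.T + 0.1 * rng.normal(size=(n, p))

# Two environments that differ only in the exogenous noise scale.
X1, X2 = sample_env(1.0), sample_env(2.0)
```

A CRL method of the kind summarized above would take the pooled samples (X1, X2) and attempt to recover B and G up to the stated equivalence class, exploiting the fact that the observed covariance differs across environments.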

📝 Abstract
Causal representation learning (CRL) has garnered increasing interest from the causal inference and artificial intelligence communities, due to its capability of disentangling potentially complex data-generating mechanisms into causally interpretable latent features by leveraging the heterogeneity of modern datasets. In this paper, we further contribute to the CRL literature by focusing on the stylized linear structural causal model over the latent features and assuming a linear mixing function that maps latent features to the observed data or measurements. Existing linear CRL methods often rely on stringent assumptions, such as access to single-node interventional data or restrictive distributional constraints on latent features and exogenous measurement noise. However, these prerequisites can be challenging to satisfy in certain scenarios. In this work, we propose a novel linear CRL algorithm that, unlike most existing linear CRL methods, operates under weaker assumptions about environment heterogeneity and data-generating distributions while still recovering latent causal features up to an equivalence class. We further validate our new algorithm via synthetic experiments and an interpretability analysis of large language models (LLMs), demonstrating both its superiority over competing methods in finite samples and its potential in integrating causality into AI.
Problem

Research questions and friction points this paper is trying to address.

Learning causal latent features from observed linear mixtures
Relaxing stringent assumptions in existing causal representation methods
Validating algorithm via synthetic tests and large language model analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Topological ordering of latent causal features
Pruning to refine causal structure
Disentanglement under weaker distributional assumptions
Hao Chen
School of Mathematical Sciences, Shanghai Jiao Tong University
Lin Liu
Institute of Natural Sciences, MOE-LSC, School of Mathematical Sciences, CMA-Shanghai, SJTU-Yale Joint Center for Biostatistics and Data Science, Shanghai Jiao Tong University
Yu Guang Wang
Associate Professor, Shanghai Jiao Tong University, University of New South Wales
Harmonic Analysis, Graph Neural Networks, Computational Mathematics, AI for Science, LLM