🤖 AI Summary
Existing linear causal representation learning methods rely either on single-node intervention data or strong distributional assumptions—such as independence between latent variables and noise—limiting their practical applicability.
Method: This paper proposes a weak-assumption causal representation learning algorithm for linear structural causal models (SCMs), requiring only environmental heterogeneity (i.e., variation in causal mechanisms or noise distributions across environments). It models the mapping from latent variables to observations via a linear mixing model, and integrates topological sorting, pruning, and disentanglement strategies to recover the equivalence class of causal features.
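The data-generating setup described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the latent variables follow a linear SCM with a lower-triangular coefficient matrix `B` (encoding a topological order), a shared linear mixing matrix `G` maps latents to observations, and heterogeneity is simulated by varying the causal coefficients and noise scales across two hypothetical environments.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, n = 3, 5, 1000  # latent dims, observed dims, samples per environment

# Shared linear mixing: observations X = G Z + measurement noise
G = rng.normal(size=(p, d))

def sample_environment(B, noise_scale, n=n):
    """Sample from a linear SCM Z = B Z + E, where B is strictly
    lower-triangular (latents respect the order 0, 1, ..., d-1),
    then apply the linear mixing to produce observations."""
    E = noise_scale * rng.normal(size=(n, d))
    A = np.linalg.inv(np.eye(d) - B)  # solve Z = (I - B)^{-1} E
    Z = E @ A.T
    X = Z @ G.T + 0.1 * rng.normal(size=(n, p))  # exogenous measurement noise
    return X

# Environment heterogeneity: causal coefficients and noise scales
# differ across environments (illustrative values)
B1 = np.array([[0.0, 0.0, 0.0], [0.8, 0.0, 0.0], [0.5, -0.7, 0.0]])
B2 = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0], [1.1, 0.4, 0.0]])
X_env1 = sample_environment(B1, noise_scale=1.0)
X_env2 = sample_environment(B2, noise_scale=1.5)
```

A CRL method under this model would take only the pooled observations `X_env1`, `X_env2` (never the latents `Z` or the matrices `B`, `G`) and attempt to recover the causal features up to an equivalence class.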
Contribution/Results: The method obviates the need for intervention data and substantially relaxes distributional constraints. Synthetic experiments demonstrate superior finite-sample performance over state-of-the-art baselines. Furthermore, it is successfully applied to causal interpretability analysis of internal representations in large language models, empirically validating the feasibility of causality-driven AI interpretability.
📝 Abstract
Causal representation learning (CRL) has garnered increasing interest from the causal inference and artificial intelligence communities, due to its capability of disentangling potentially complex data-generating mechanisms into causally interpretable latent features, by leveraging the heterogeneity of modern datasets. In this paper, we further contribute to the CRL literature by focusing on the stylized linear structural causal model over the latent features and assuming a linear mixing function that maps latent features to the observed data or measurements. Existing linear CRL methods often rely on stringent assumptions, such as access to single-node interventional data or restrictive distributional constraints on latent features and exogenous measurement noise. However, these prerequisites can be challenging to satisfy in certain scenarios. In this work, we propose a novel linear CRL algorithm that, unlike most existing linear CRL methods, operates under weaker assumptions about environment heterogeneity and data-generating distributions while still recovering latent causal features up to an equivalence class. We further validate our new algorithm via synthetic experiments and an interpretability analysis of large language models (LLMs), demonstrating both its superiority over competing methods in finite samples and its potential for integrating causality into AI.