🤖 AI Summary
This work addresses the challenge of accurately modeling and estimating causal effects among high-dimensional variables in transfer learning scenarios with limited samples. To this end, the authors propose the Structural Causal Bottleneck Model (SCBM), which integrates causal modeling with the information bottleneck principle by assuming that causal effects are transmitted exclusively through a low-dimensional summary—i.e., a bottleneck—of the cause variable. This approach enables task-oriented causal dimensionality reduction and differs from existing causal representation learning methods by offering theoretical identifiability guarantees and supporting efficient causal effect estimation under few-shot conditions. Theoretical analysis establishes sufficient conditions for identifiability, and experiments demonstrate that the learned bottleneck significantly improves the accuracy of causal effect estimation in low-data transfer tasks.
📝 Abstract
We introduce structural causal bottleneck models (SCBMs), a novel class of structural causal models. At the core of SCBMs lies the assumption that causal effects between high-dimensional variables only depend on low-dimensional summary statistics, or bottlenecks, of the causes. SCBMs provide a flexible framework for task-specific dimension reduction while being estimable via standard, simple learning algorithms in practice. We analyse identifiability in SCBMs, connect them to information bottlenecks in the sense of Tishby & Zaslavsky (2015), and illustrate how to estimate them experimentally. We also demonstrate the benefit of bottlenecks for effect estimation in low-sample transfer learning settings. We argue that SCBMs provide an alternative to existing causal dimension reduction frameworks like causal representation learning or causal abstraction learning.
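The core assumption can be illustrated with a minimal simulation. The sketch below is hypothetical and not taken from the paper: it assumes a linear one-dimensional bottleneck b(X) = Xw, generates an outcome that depends on X only through b(X), and shows that the causal effect then reduces to a one-dimensional regression problem even when X itself is high-dimensional and the sample is small.

```python
import numpy as np

rng = np.random.default_rng(0)

# High-dimensional cause X (d = 50) in a low-sample regime (n = 30).
n, d = 30, 50
X = rng.normal(size=(n, d))

# SCBM-style assumption (hypothetical parametrisation): the effect of X on
# the outcome y flows only through a 1-dimensional bottleneck b(X) = X @ w.
w = rng.normal(size=d)
b = X @ w
y = 2.0 * b + rng.normal(scale=0.1, size=n)

# If the bottleneck is known (e.g. transferred from a related source task),
# the effect is recovered by a 1-dimensional OLS regression of y on b,
# rather than an ill-posed d-dimensional regression with n < d samples.
effect_on_bottleneck = (b @ y) / (b @ b)
print(effect_on_bottleneck)
```

With the true coefficient set to 2.0, the one-dimensional estimate lands very close to it despite n < d, which is the kind of low-sample advantage the abstract claims for known bottlenecks in transfer settings.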