🤖 AI Summary
Estimating causal quantities (CQs) often incurs high data-acquisition costs, especially when individual outcome measurements are expensive, and existing active learning methods predominantly target the conditional average treatment effect and do not generalize beyond it. This paper proposes ActiveCQ—the first unified active learning framework for causal quantity estimation—formally defining the active estimation problem for generalized CQs. ActiveCQ models the regression function with a Gaussian process and the distributional component with conditional mean embeddings in a reproducing kernel Hilbert space, an integrated approach that circumvents explicit density estimation and allows both components to be refined jointly after each update. Acquisition strategies are derived from the CQ's posterior uncertainty, instantiated with information gain and total variance reduction as utility functions. Experiments demonstrate that ActiveCQ significantly outperforms baseline methods across diverse CQ estimation tasks, substantially improving sample efficiency on both synthetic and semi-synthetic datasets.
📝 Abstract
Estimating causal quantities (CQs) typically requires large datasets, which can be expensive to obtain, especially when measuring individual outcomes is costly. This challenge highlights the importance of sample-efficient active learning strategies. To address the narrow focus of prior work on the conditional average treatment effect, we formalize the broader task of Actively estimating Causal Quantities (ActiveCQ) and propose a unified framework for this general problem. Built upon the insight that many CQs are integrals of regression functions, our framework models the regression function with a Gaussian process (GP). For the distribution component, we explore both a baseline using explicit density estimators and a more integrated method using conditional mean embeddings in a reproducing kernel Hilbert space. This latter approach offers key advantages: it bypasses explicit density estimation, operates within the same function space as the GP, and adaptively refines the distributional model after each update. Our framework enables the principled derivation of acquisition strategies from the CQ's posterior uncertainty; we instantiate this principle with two utility functions based on information gain and total variance reduction. A range of simulated and semi-synthetic experiments demonstrates that our principled framework significantly outperforms relevant baselines, achieving substantial gains in sample efficiency across a variety of CQs.
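To make the core idea concrete, here is a minimal sketch of the two ingredients the abstract describes: treating a CQ as an integral of a GP-modeled regression function (approximated by a Monte-Carlo average over a population sample), and a total-variance-reduction acquisition rule that queries the pool point whose label would most shrink the CQ's posterior variance. This is an illustrative toy, not the paper's implementation: the RBF kernel, fixed lengthscale, noise level, and the placeholder label in the acquisition loop are all assumptions (GP posterior variances do not depend on label values, so any stand-in works there).

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel between row-stacked point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def cq_posterior(X_lab, y_lab, X_pop, noise=1e-2, ls=1.0):
    """Posterior mean and variance of the CQ  tau = (1/m) * sum_j f(x_j),
    a Monte-Carlo approximation of the integral of the regression
    function f over the population sample X_pop, under a GP prior on f."""
    K = rbf(X_lab, X_lab, ls) + noise * np.eye(len(X_lab))
    Ks = rbf(X_pop, X_lab, ls)
    Kss = rbf(X_pop, X_pop, ls)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y_lab                 # GP posterior mean at X_pop
    cov = Kss - Ks @ Kinv @ Ks.T           # GP posterior covariance at X_pop
    w = np.full(len(X_pop), 1.0 / len(X_pop))  # uniform integration weights
    return w @ mu, w @ cov @ w             # mean and variance of tau

def next_query(X_lab, y_lab, X_pool, X_pop, noise=1e-2, ls=1.0):
    """Total-variance-reduction acquisition: pick the pool point whose
    (hypothetical) observation minimizes the CQ's posterior variance."""
    best, best_var = None, np.inf
    for i, x in enumerate(X_pool):
        Xa = np.vstack([X_lab, x[None, :]])
        ya = np.append(y_lab, 0.0)  # placeholder; variance is label-free
        _, v = cq_posterior(Xa, ya, X_pop, noise, ls)
        if v < best_var:
            best, best_var = i, v
    return best, best_var
```

In an active loop one would alternate `next_query`, observing the chosen point's true outcome, and re-running `cq_posterior`; the paper's CME-based variant additionally reweights the integration weights `w` from data rather than fixing them uniformly.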