🤖 AI Summary
This work addresses the challenge of reliably quantifying epistemic uncertainty in causal effect estimation under unobserved confounding, where existing instrumental variable and proximal causal methods often fall short. The authors propose a Deconditional Gaussian Process (DGP) framework that integrates Gaussian processes into causal inference, leveraging the posterior mean and variance to deliver accurate point predictions and well-calibrated estimates of epistemic uncertainty, respectively. This approach unifies kernel-based estimation with Bayesian uncertainty quantification and enables systematic model selection via the marginal log-likelihood. Empirical evaluations demonstrate that DGP consistently achieves superior predictive performance and more reliable uncertainty estimates across multiple benchmarks, significantly outperforming current methods in terms of empirical coverage frequency and accuracy–rejection curves tailored to decision-aware settings.
📝 Abstract
Instrumental variable (IV) and proximal causal learning (Proxy) methods are central frameworks for causal inference in the presence of unobserved confounding. Despite substantial methodological advances, existing approaches rarely provide reliable epistemic uncertainty (EU) quantification. We address this gap through a Deconditional Gaussian Process (DGP) framework for uncertainty-aware causal learning. Our formulation recovers popular kernel estimators as the posterior mean, ensuring predictive precision, while the posterior variance yields principled and well-calibrated EU. Moreover, the probabilistic structure enables systematic model selection via marginal log-likelihood optimization. Empirical results demonstrate strong predictive performance alongside informative EU quantification, evaluated via empirical coverage frequencies and decision-aware accuracy–rejection curves. Altogether, our approach provides a unified, practical solution for causal inference under unobserved confounding with reliable uncertainty quantification.
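The abstract's central Bayesian ingredients — a posterior mean as the point estimate, a posterior variance as epistemic uncertainty, and the marginal log-likelihood as a model-selection criterion — can be illustrated with plain Gaussian process regression. The sketch below is an assumption-laden toy (standard GP regression with an RBF kernel, not the paper's deconditional construction; all function names are illustrative):

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel: variance * exp(-|a - b|^2 / (2 * l^2))
    d2 = (A[:, None] - B[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xs, noise=0.1, lengthscale=1.0):
    # Standard GP regression: posterior mean (point prediction),
    # posterior variance (epistemic uncertainty), and the marginal
    # log-likelihood used for hyperparameter / model selection.
    K = rbf_kernel(X, X, lengthscale) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xs, X, lengthscale)
    Kss = rbf_kernel(Xs, Xs, lengthscale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks @ alpha                      # posterior mean
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss - v.T @ v)           # posterior variance (EU)
    mll = (-0.5 * y @ alpha
           - np.log(np.diag(L)).sum()
           - 0.5 * len(X) * np.log(2 * np.pi))
    return mean, var, mll

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, 20)
y = np.sin(X) + 0.1 * rng.standard_normal(20)
Xs = np.array([0.0, 5.0])  # one in-distribution point, one extrapolation point
mean, var, mll = gp_posterior(X, y, Xs)
# Epistemic uncertainty grows away from the observed data:
assert var[1] > var[0]
```

The same pattern underlies the paper's evaluation: low posterior variance where data constrain the estimate, high variance where they do not, which is what coverage frequencies and accuracy–rejection curves probe.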