Generalized Bayes for Causal Inference

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes the first generalized Bayesian framework tailored for causal machine learning, addressing the lack of flexible and robust Bayesian uncertainty quantification for causal effects in existing methods. By placing priors directly on causal estimands and constructing posteriors via Neyman-orthogonal loss functions, thereby bypassing explicit likelihood modeling, the approach remains compatible with mainstream meta-learners and supports nonparametric bootstrap-based inference. Notably, the generalized posteriors converge to their oracle counterparts and, with calibration, yield valid frequentist uncertainty even when nuisance estimators converge at rates slower than the parametric rate. Empirical evaluations demonstrate that the framework delivers accurate and well-calibrated uncertainty quantification across diverse causal inference settings.

📝 Abstract
Uncertainty quantification is central to many applications of causal machine learning, yet principled Bayesian inference for causal effects remains challenging. Standard Bayesian approaches typically require specifying a probabilistic model for the data-generating process, including high-dimensional nuisance components such as propensity scores and outcome regressions. Standard posteriors are thus vulnerable to strong modeling choices, including complex prior elicitation. In this paper, we propose a generalized Bayesian framework for causal inference. Our framework avoids explicit likelihood modeling; instead, we place priors directly on the causal estimands and update these using an identification-driven loss function, which yields generalized posteriors for causal effects. As a result, our framework turns existing loss-based causal estimators into estimators with full uncertainty quantification. Our framework is flexible and applicable to a broad range of causal estimands (e.g., ATE, CATE). Further, our framework can be applied on top of state-of-the-art causal machine learning pipelines (e.g., Neyman-orthogonal meta-learners). For Neyman-orthogonal losses, we show that the generalized posteriors converge to their oracle counterparts and remain robust to first-stage nuisance estimation error. With calibration, we thus obtain valid frequentist uncertainty even when nuisance estimators converge at slower-than-parametric rates. Empirically, we demonstrate that our proposed framework offers causal effect estimation with calibrated uncertainty across several causal inference settings. To the best of our knowledge, this is the first flexible framework for constructing generalized Bayesian posteriors for causal machine learning.
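The loss-based update described in the abstract can be illustrated with a Gibbs-posterior construction. The sketch below is not the paper's implementation; it is a minimal, self-contained illustration under assumed choices: simulated data with a single confounder, crude plug-in nuisance fits standing in for ML learners, AIPW pseudo-outcomes as a Neyman-orthogonal target, and a squared-error loss with a Gaussian prior, for which the generalized posterior for the ATE is available in closed form. The learning rate `lam` and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulate a simple observational dataset (hypothetical setup) ---
n = 5000
x = rng.normal(size=n)                      # single confounder
e = 1.0 / (1.0 + np.exp(-0.8 * x))          # true propensity score
a = rng.binomial(1, e)                      # treatment indicator
tau = 2.0                                   # true ATE
y = x + tau * a + rng.normal(size=n)        # outcome

# --- First stage: crude plug-in nuisance estimates (stand-ins for ML learners) ---
def fit_propensity(x, a, iters=25):
    """Logistic regression via a few Newton steps; returns fitted propensities."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        W = p * (1.0 - p)
        b += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (a - p))
    return 1.0 / (1.0 + np.exp(-X @ b))

e_hat = fit_propensity(x, a)
X = np.column_stack([np.ones_like(x), x])
mu1 = X @ np.linalg.lstsq(X[a == 1], y[a == 1], rcond=None)[0]  # E[Y|A=1,X] via OLS
mu0 = X @ np.linalg.lstsq(X[a == 0], y[a == 0], rcond=None)[0]  # E[Y|A=0,X] via OLS

# --- Neyman-orthogonal (AIPW) pseudo-outcomes ---
phi = mu1 - mu0 + a * (y - mu1) / e_hat - (1 - a) * (y - mu0) / (1.0 - e_hat)

# --- Generalized (Gibbs) posterior for the ATE theta ---
# Posterior density is proportional to prior(theta) * exp(-lam * sum_i (phi_i - theta)^2).
# With squared loss and a Gaussian prior this is conjugate, so the posterior is Gaussian.
lam = 0.5                                   # learning rate (a tuning choice)
prior_mean, prior_var = 0.0, 10.0 ** 2
post_prec = 1.0 / prior_var + 2.0 * lam * n
post_mean = (prior_mean / prior_var + 2.0 * lam * phi.sum()) / post_prec
post_sd = np.sqrt(1.0 / post_prec)

print(f"generalized posterior for ATE: {post_mean:.3f} +/- {1.96 * post_sd:.3f}")
```

Because the pseudo-outcomes are orthogonal to first-stage errors, the resulting posterior concentrates near the true ATE even with the simple nuisance fits used here; the paper's framework additionally covers other estimands, richer losses, and bootstrap-based calibration.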
Problem

Research questions and friction points this paper is trying to address.

causal inference
Bayesian inference
uncertainty quantification
generalized Bayes
nuisance parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generalized Bayes
Causal Inference
Uncertainty Quantification
Neyman-orthogonal loss
Loss-based posterior