CausalPFN: Amortized Causal Effect Estimation via In-Context Learning

📅 2025-06-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Causal effect estimation typically relies on manual estimator selection, incurring high computational cost and limited generalizability. To address this, CausalPFN is a tuning-free, plug-and-play causal inference framework that integrates the prior-fitted network (PFN) paradigm with Bayesian causal inference in a single large transformer. The model performs in-context learning directly from observational data to estimate heterogeneous and average treatment effects, without model selection, fine-tuning, or domain-specific adaptation, while simultaneously delivering calibrated uncertainty quantification. Trained on large-scale synthetic data whose generating processes satisfy ignorability, it learns to approximate the posterior over causal effects. The method achieves state-of-the-art average performance on standard benchmarks (IHDP, Lalonde, ACIC), is competitive on real-world uplift modeling tasks, and substantially lowers the barrier to rigorous causal analysis.

📝 Abstract
Causal effect estimation from observational data is fundamental across various applications. However, selecting an appropriate estimator from dozens of specialized methods demands substantial manual effort and domain expertise. We present CausalPFN, a single transformer that amortizes this workflow: trained once on a large library of simulated data-generating processes that satisfy ignorability, it infers causal effects for new observational datasets out-of-the-box. CausalPFN combines ideas from Bayesian causal inference with the large-scale training protocol of prior-fitted networks (PFNs), learning to map raw observations directly to causal effects without any task-specific adjustment. Our approach achieves superior average performance on heterogeneous and average treatment effect estimation benchmarks (IHDP, Lalonde, ACIC). Moreover, it shows competitive performance for real-world policy making on uplift modeling tasks. CausalPFN provides calibrated uncertainty estimates to support reliable decision-making based on Bayesian principles. This ready-to-use model does not require any further training or tuning and takes a step toward automated causal inference (https://github.com/vdblm/CausalPFN).
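The training setup described above — a large library of simulated data-generating processes (DGPs) that satisfy ignorability, each with ground-truth effects available for supervision — can be sketched in miniature. The following is a hypothetical illustration, not the paper's actual simulator: treatment assignment depends only on observed covariates, so the ground-truth CATE and ATE are known by construction.

```python
import numpy as np

def sample_dgp(n=1000, seed=0):
    """Draw one synthetic observational dataset in which treatment
    assignment depends only on observed covariates (ignorability),
    so ground-truth effects are available for supervision."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n, 2))                     # observed covariates
    propensity = 1.0 / (1.0 + np.exp(-x[:, 0]))    # depends on x only
    t = rng.binomial(1, propensity)                 # treatment assignment
    tau = 1.0 + 0.5 * x[:, 1]                       # heterogeneous effect (CATE)
    y0 = x.sum(axis=1) + rng.normal(scale=0.1, size=n)  # control outcome
    y = y0 + tau * t                                # observed outcome
    return x, t, y, tau

x, t, y, tau = sample_dgp()
print(x.shape, tau.mean())  # ATE is close to 1.0 under this DGP
```

A prior-fitted network is trained across many such draws to map the raw tuples (x, t, y) directly to the effects tau, which is what lets it estimate effects for a new dataset in-context.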
Problem

Research questions and friction points this paper is trying to address.

Automates causal effect estimation from observational data
Eliminates the need for manual estimator selection and domain expertise
Provides calibrated uncertainty for reliable decision-making
Innovation

Methods, ideas, or system contributions that make the work stand out.

A single transformer that amortizes causal effect estimation via in-context learning
Large-scale training on simulated data-generating processes satisfying ignorability
Calibrated Bayesian uncertainty estimates
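The calibration claim above can be made concrete with a generic check (not the paper's code): for well-calibrated posterior intervals, a nominal 90% credible interval should cover the true effect about 90% of the time across tasks. The sketch below uses a stand-in Gaussian posterior purely to illustrate the coverage criterion.

```python
import numpy as np

def empirical_coverage(n_tasks=2000, seed=0):
    """Fraction of simulated tasks whose nominal 90% credible interval
    covers the true effect, for an exactly calibrated Gaussian posterior."""
    rng = np.random.default_rng(seed)
    sigma = 0.3                                           # posterior std per task
    true_ate = rng.normal(size=n_tasks)                   # true effect per task
    post_mean = true_ate + rng.normal(scale=sigma, size=n_tasks)
    z = 1.645                                             # two-sided 90% normal quantile
    lo, hi = post_mean - z * sigma, post_mean + z * sigma
    return float(np.mean((lo <= true_ate) & (true_ate <= hi)))

print(empirical_coverage())  # close to 0.90 when calibrated
```

Miscalibration shows up as coverage drifting away from the nominal level, which is the failure mode the paper's Bayesian training protocol is meant to avoid.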