🤖 AI Summary
Existing causal inference frameworks—such as the potential outcomes model—rely on untestable counterfactual assumptions and abstract probability distributions, hindering verifiable, population-specific decision support. This paper proposes a novel paradigm: “finite-population treatment effect prediction modeling,” which treats the target population as the fundamental modeling unit and recasts causal inference as an empirically falsifiable prediction problem. It abandons unverifiable independence assumptions, rigorously distinguishes statistical from scientific inference, and introduces three core methodological innovations: analyzable treatment assignment mechanisms, error-source diagnostics, and fully testable causal modeling. For the first time, this framework enables systematic empirical validation of causal assumptions, exposes the fundamental dependence of causal conclusions on model specification, and substantially enhances transparency, reproducibility, and policy relevance of causal reasoning.
📝 Abstract
The most common approach to causal modelling is the potential outcomes framework due to Neyman and Rubin. In this framework, outcomes of counterfactual treatments are assumed to be well-defined. This metaphysical assumption is often thought to be problematic yet indispensable. The conventional approach relies not only on counterfactuals but also on abstract notions of distributions and on independence assumptions that are not directly testable. In this paper, we construe causal inference as treatment-wise prediction for finite populations in which all assumptions are testable; one can therefore not only test the predictions themselves (without running into any fundamental problem) but also investigate the sources of error when they fail. The new framework highlights the model-dependence of causal claims as well as the difference between statistical and scientific inference.
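To make the idea of "treatment-wise prediction for a finite population" concrete, here is a minimal sketch in Python. All names and data are invented for illustration (the paper specifies no particular model): we fit one predictive model per treatment arm and then test its predictions directly against observed (treatment, outcome) pairs, so that model failure is empirically detectable rather than hidden behind untestable assumptions.

```python
# Hypothetical sketch: causal inference recast as treatment-wise prediction
# for a finite population. Nothing here is the paper's own implementation.
import random

random.seed(0)

# A finite population: each unit has a covariate x, an observed treatment t,
# and an observed outcome y (simulated here purely for illustration).
population = []
for _ in range(200):
    x = random.uniform(0, 1)
    t = random.randint(0, 1)
    y = 2.0 * x + (1.0 if t == 1 else 0.0) + random.gauss(0, 0.1)
    population.append((x, t, y))

def fit_mean_model(units):
    """Deliberately simple per-arm predictor (a stand-in for any model):
    predict the arm's mean outcome regardless of covariates."""
    ys = [y for _, _, y in units]
    mean = sum(ys) / len(ys)
    return lambda x: mean

train, test = population[:150], population[150:]

# One predictive model per treatment arm -- "treatment-wise" prediction.
models = {arm: fit_mean_model([u for u in train if u[1] == arm])
          for arm in (0, 1)}

# The key point: predictions are checked against observed (treatment, outcome)
# pairs, so poor model specification shows up as measurable prediction error
# whose sources can then be diagnosed.
errors = [abs(models[t](x) - y) for x, t, y in test]
print(f"mean absolute prediction error: {sum(errors) / len(errors):.3f}")
```

Because this trivial model ignores the covariate x, it incurs visible prediction error; swapping in a better-specified model would shrink that error, illustrating the framework's point that causal conclusions depend on model specification in a way that is open to empirical scrutiny.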