🤖 AI Summary
In causal inference, linear smoothers face a fundamental trade-off among covariate balance, dependence on parametric modeling assumptions, and estimation variance: negative weights reduce imbalance but increase both misspecification risk and variance, whereas non-negativity constraints have the opposite effect. This paper proposes a unified extrapolation-regularization framework that replaces the hard non-negativity constraint with a soft penalty, introducing a novel "bias-bias-variance" tradeoff that formally characterizes the intrinsic conflict among covariate imbalance, model misspecification, and estimation variance, a conflict that is especially pronounced in high dimensions and when positivity is poor. Building on linear smoother theory, the authors connect kernel ridge regression and importance weighting within a tunable regularization procedure that jointly controls covariate imbalance and a worst-case extrapolation error bound, and show how it can serve as a sensitivity analysis for dependence on parametric modeling assumptions. Experiments on synthetic data and a real-world task of generalizing randomized controlled trial (RCT) estimates to a target population demonstrate substantial reductions in extrapolation bias and improved robustness and accuracy of treatment effect estimation.
📝 Abstract
Many common estimators in machine learning and causal inference are linear smoothers, where the prediction is a weighted average of the training outcomes. Some estimators, such as ordinary least squares and kernel ridge regression, allow for arbitrarily negative weights, which reduce feature imbalance but often at the cost of increased dependence on parametric modeling assumptions and higher variance. By contrast, estimators like importance weighting and random forests (sometimes implicitly) restrict weights to be non-negative, reducing dependence on parametric modeling and variance at the cost of worse imbalance. In this paper, we propose a unified framework that directly penalizes the level of extrapolation, replacing the current practice of a hard non-negativity constraint with a soft constraint and corresponding hyperparameter. We derive a worst-case extrapolation error bound and introduce a novel "bias-bias-variance" tradeoff, encompassing biases due to feature imbalance, model misspecification, and estimator variance; this tradeoff is especially pronounced in high dimensions, particularly when positivity is poor. We then develop an optimization procedure that regularizes this bound while minimizing imbalance and outline how to use this approach as a sensitivity analysis for dependence on parametric modeling assumptions. We demonstrate the effectiveness of our approach through synthetic experiments and a real-world application involving the generalization of randomized controlled trial estimates to a target population of interest.
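To make the hard-vs-soft constraint idea concrete, here is a minimal NumPy sketch (not the paper's actual algorithm) of balancing weights for a single target point. The objective, a squared imbalance term plus a ridge term plus a one-sided penalty `gamma * ||min(w, 0)||^2` on negative weight mass, is a simple stand-in for the paper's extrapolation penalty: `gamma = 0` recovers an unconstrained, OLS-like solution with negative weights, while large `gamma` approximates the hard non-negativity constraint. All names and hyperparameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
X = rng.normal(size=(n, d))           # source covariates
x_target = np.array([2.0, 2.0, 2.0])  # target point outside the bulk of X

def soft_balanced_weights(X, x_target, lam=0.1, gamma=0.0,
                          lr=5e-3, steps=20_000):
    """Gradient descent on
        ||X.T @ w - x_target||^2 + lam * ||w||^2 + gamma * ||min(w, 0)||^2.
    gamma = 0      -> negative weights allowed (best balance, more extrapolation)
    gamma -> large -> approaches a hard non-negativity constraint."""
    w = np.full(X.shape[0], 1.0 / X.shape[0])
    for _ in range(steps):
        grad = (2 * X @ (X.T @ w - x_target)   # imbalance term
                + 2 * lam * w                  # ridge term
                + 2 * gamma * np.minimum(w, 0.0))  # penalize only negative part
        w -= lr * grad
    return w

w_free = soft_balanced_weights(X, x_target, gamma=0.0)
w_soft = soft_balanced_weights(X, x_target, gamma=50.0)

neg_free = -np.minimum(w_free, 0.0).sum()  # total negative weight mass
neg_soft = -np.minimum(w_soft, 0.0).sum()
imb_free = np.linalg.norm(X.T @ w_free - x_target)
imb_soft = np.linalg.norm(X.T @ w_soft - x_target)
```

Because the target point lies outside the bulk of the source covariates (poor positivity), the unpenalized solution uses negative weights to achieve balance; raising `gamma` shrinks that negative mass, trading some imbalance for less extrapolation, which is exactly the knob the soft-constraint framework exposes.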