Cross-Balancing for Data-Informed Design and Efficient Analysis of Observational Studies

📅 2025-11-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the dual challenges of covariate balance and causal effect estimation in observational studies. We propose Cross-Balancing, a method that decouples feature learning from weight estimation via sample splitting, enabling adaptive selection or learning of balancing features from high-dimensional candidate variables. The approach integrates covariate balancing, high-dimensional variable screening, and plug-in bias correction, achieving both multiple robustness and asymptotic efficiency. Theoretically, the proposed estimator is shown to be consistent, asymptotically normal, and to attain the semiparametric efficiency bound. Empirically, Cross-Balancing substantially improves estimation accuracy and inferential power while preserving statistical validity and model interpretability.

📝 Abstract
Causal inference starts with a simple idea: compare groups that differ by treatment, not much else. Traditionally, similar groups are constructed using only observed covariates; however, it remains a long-standing challenge to incorporate available outcome data into the study design while preserving valid inference. In this paper, we study the general problem of covariate adjustment, effect estimation, and statistical inference when balancing features are constructed or selected with the aid of outcome information from the data. We propose cross-balancing, a method that uses sample splitting to separate the error in feature construction from the error in weight estimation. Our framework addresses two cases: one where the features are learned functions and one where they are selected from a potentially high-dimensional dictionary. In both cases, we establish mild and general conditions under which cross-balancing produces consistent, asymptotically normal, and efficient estimators. In the learned-function case, cross-balancing achieves finite-sample bias reduction relative to plug-in-type estimators, and is multiply robust when the learned features converge at slow rates. In the variable-selection case, cross-balancing only requires a product condition on how well the selected variables approximate true functions. We illustrate cross-balancing in extensive simulations and an observational study, showing that careful use of outcome information can substantially improve both estimation and inference while maintaining interpretability.
Problem

Research questions and friction points this paper is trying to address.

Addresses covariate adjustment and effect estimation using outcome data
Ensures valid statistical inference when balancing features use outcome information
Handles high-dimensional feature selection and learned function cases
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-balancing uses sample splitting for feature construction
Method separates feature error from weight estimation error
Produces consistent and efficient estimators in observational studies
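The cross-balancing idea described above (learn balancing features on one fold, estimate balancing weights on the other, then swap and average) can be illustrated with a minimal sketch. This is not the authors' implementation: the outcome model (ordinary least squares on controls), the one-feature entropy-balancing step, the ATT target, and all function names here are illustrative assumptions chosen to keep the example self-contained.

```python
import numpy as np

def learn_feature(X_ctrl, y_ctrl):
    # Illustrative "learned feature": a linear outcome model fit on controls.
    Z = np.column_stack([np.ones(len(X_ctrl)), X_ctrl])
    beta, *_ = np.linalg.lstsq(Z, y_ctrl, rcond=None)
    return lambda X: np.column_stack([np.ones(len(X)), X]) @ beta

def entropy_balance(f_ctrl, target, iters=50):
    # Weights w_i ∝ exp(lam * f_i); Newton steps on the dual so the
    # weighted mean of the learned feature among controls hits `target`.
    lam = 0.0
    for _ in range(iters):
        w = np.exp(lam * f_ctrl)
        w /= w.sum()
        mean = w @ f_ctrl
        var = w @ (f_ctrl - mean) ** 2
        lam += (target - mean) / max(var, 1e-12)
    w = np.exp(lam * f_ctrl)
    return w / w.sum()

def cross_balance_att(X, y, t, rng):
    # Cross-balancing sketch for the ATT: learn the feature on one fold,
    # balance and estimate on the other, swap folds, and average.
    n = len(y)
    fold = rng.permutation(n) % 2
    ests = []
    for k in (0, 1):
        train, est = fold == k, fold != k
        feat = learn_feature(X[train & (t == 0)], y[train & (t == 0)])
        Xc, yc = X[est & (t == 0)], y[est & (t == 0)]
        Xt, yt = X[est & (t == 1)], y[est & (t == 1)]
        w = entropy_balance(feat(Xc), feat(Xt).mean())
        ests.append(yt.mean() - w @ yc)
    return float(np.mean(ests))
```

Because the feature is learned on a fold disjoint from the one used for weighting, the feature-construction error does not contaminate the weight-estimation step, which is the separation the paper's theory formalizes.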
Ying Jin
Department of Statistics and Data Science, University of Pennsylvania, Academic Research Building, Room 441, Philadelphia, PA 19104
José R. Zubizarreta
Professor, Harvard University
Causal inference · Observational studies · Applications of statistics to health care and public policy