Partial identification via conditional linear programs: estimation and policy learning

📅 2025-06-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses partial identification, where parameters in causal and decision-making problems are constrained by observed data only to a set of plausible values, by proposing a unified framework based on conditional linear programming (CLP) for estimation, inference, and policy learning under covariate-dependent linear constraints. It introduces two debiased estimators: (i) a plug-in debiased estimator constructed directly from standard LP solver outputs, avoiding computationally expensive vertex enumeration; and (ii) an entropy-regularized, differentiable approximation of the CLP that trades a small approximation error for improved statistical and computational efficiency. The paper establishes first-order robustness and asymptotic normality of both estimators, enabling Wald-type confidence interval construction, and demonstrates valid policy evaluation under partial identification in an application estimating the causal effect of Medicaid enrollment.

📝 Abstract
Many important quantities of interest are only partially identified from observable data: the data can limit them to a set of plausible values, but not uniquely determine them. This paper develops a unified framework for covariate-assisted estimation, inference, and decision making in partial identification problems where the parameter of interest satisfies a series of linear constraints, conditional on covariates. In such settings, bounds on the parameter can be written as expectations of solutions to conditional linear programs that optimize a linear function subject to linear constraints, where both the objective function and the constraints may depend on covariates and need to be estimated from data. Examples include estimands involving the joint distributions of potential outcomes, policy learning with inequality-aware value functions, and instrumental variable settings. We propose two de-biased estimators for bounds defined by conditional linear programs. The first directly solves the conditional linear programs with plugin estimates and uses output from standard LP solvers to de-bias the plugin estimate, avoiding the need for computationally demanding vertex enumeration of all possible solutions for symbolic bounds. The second uses entropic regularization to create smooth approximations to the conditional linear programs, trading a small amount of approximation error for improved estimation and computational efficiency. We establish conditions for asymptotic normality of both estimators, show that both estimators are robust to first-order errors in estimating the conditional constraints and objectives, and construct Wald-type confidence intervals for the partially identified parameters. These results also extend to policy learning problems where the value of a decision policy is only partially identified. We apply our methods to a study on the effects of Medicaid enrollment.
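The abstract describes bounds written as expectations of optimal values of conditional linear programs: for each covariate value, a linear objective is minimized subject to linear constraints, and the optimal values are averaged. A minimal sketch of the plug-in step, with hypothetical estimated nuisance functions `c_hat` and `b_eq` standing in for the conditional objective and constraints (this is an illustration of the setup, not the paper's estimator, which additionally debiases the plug-in value):

```python
# Illustrative sketch: a plug-in estimate of a lower bound defined by
# conditional linear programs. For each covariate value w, solve
#   min_x  c_hat(w)^T x   subject to  sum(x) = 1, x >= 0,
# then average the optimal values over the sample. c_hat is a hypothetical
# estimated conditional objective; the simplex constraint is a toy stand-in
# for the paper's covariate-dependent linear constraints.
import numpy as np
from scipy.optimize import linprog

def c_hat(w):
    # hypothetical estimated conditional objective
    return np.array([1.0 + w, 2.0 - w])

def plugin_bound(ws):
    vals = []
    for w in ws:
        res = linprog(c_hat(w),
                      A_eq=np.ones((1, 2)), b_eq=np.array([1.0]),
                      bounds=[(0.0, None)] * 2)
        vals.append(res.fun)
    return float(np.mean(vals))

ws = np.linspace(0.0, 1.0, 20)
print(plugin_bound(ws))
```

Because the nuisance functions are estimated, the raw plug-in average is biased; the paper's first estimator corrects it using dual information already returned by standard LP solvers, rather than enumerating vertices symbolically.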
Problem

Research questions and friction points this paper is trying to address.

Estimating partially identified quantities with linear constraints
Developing debiased estimators for conditional linear programs
Applying methods to policy learning with partial identification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conditional linear programs for partial identification
De-biased estimators using LP solvers
Entropic regularization for smooth approximations
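The entropic-regularization idea above can be illustrated on the simplest case, an LP over the probability simplex: min over the simplex of c^T x equals min_i c_i, and the entropy-regularized value -tau * log(sum_i exp(-c_i / tau)) is a smooth, differentiable lower approximation that converges to the exact minimum as tau -> 0 (a toy sketch of the smoothing mechanism, not the paper's estimator):

```python
# Entropic smoothing of min_{x in simplex} c^T x = min_i c_i.
# The smoothed value  -tau * log(sum_i exp(-c_i / tau))  is differentiable in c
# (its gradient is the softmax of -c/tau) and its gap to the true minimum is at
# most tau * log(n), so tau trades approximation error for smoothness.
import numpy as np

def smoothed_min(c, tau):
    c = np.asarray(c, dtype=float)
    m = c.min()  # shift for a numerically stable log-sum-exp
    return m - tau * np.log(np.sum(np.exp(-(c - m) / tau)))

c = np.array([0.7, 1.2, 0.9])
for tau in (1.0, 0.1, 0.01):
    print(tau, smoothed_min(c, tau))  # approaches min(c) = 0.7 as tau shrinks
```

Differentiability is what makes this variant attractive for estimation: the smoothed program has a well-defined gradient everywhere, whereas the exact LP value is piecewise linear with kinks where the optimal vertex changes.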