🤖 AI Summary
This paper applies classical statistical decision theory to treatment choice under partial identification, examining welfare maximization and regret minimization within a Gaussian likelihood framework. The analysis reveals both challenges and opportunities: in a general class of such problems, all decision rules are admissible; ignoring all data is maximin-welfare optimal; and, under severe enough partial identification, infinitely many decision rules attain minimax regret, all of which sometimes randomize the policy recommendation. To distinguish among these rules, the authors introduce a "profiled regret" criterion that renders some of them inadmissible, and they uniquely characterize the minimax-regret optimal rule that least frequently randomizes. The results are applied to canonical settings, including aggregation of experimental estimates for policy adoption, extrapolation of Local Average Treatment Effects, and policy making in the presence of omitted variable bias, yielding implementable tools for robust policy decisions under partial identification.
📝 Abstract
We apply classical statistical decision theory to a large class of treatment choice problems with partial identification, revealing important theoretical and practical challenges but also interesting research opportunities. The challenges are: In a general class of problems with Gaussian likelihood, all decision rules are admissible; it is maximin-welfare optimal to ignore all data; and, for severe enough partial identification, there are infinitely many minimax-regret optimal decision rules, all of which sometimes randomize the policy recommendation. The opportunities are: We introduce a profiled regret criterion that can reveal important differences between rules and render some of them inadmissible; and we uniquely characterize the minimax-regret optimal rule that least frequently randomizes. We apply our results to aggregation of experimental estimates for policy adoption, to extrapolation of Local Average Treatment Effects, and to policy making in the presence of omitted variable bias.
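The "sometimes randomize" phenomenon mentioned in the abstract can be illustrated with a classic no-data toy problem (an assumed example for intuition, not the paper's Gaussian-likelihood framework): when a treatment effect theta is only known to lie in an interval [L, U] with L < 0 < U and no informative data are observed, the minimax-regret rule treats with an interior probability U / (U - L), i.e., it randomizes. The sketch below verifies this numerically on a grid; the bounds L and U are made-up values.

```python
import numpy as np

# Toy setting (illustrative assumption, not the paper's model): the treatment
# effect theta lies in [L, U] with L < 0 < U, and no data are observed.
# A possibly randomized rule treats with probability delta in [0, 1].
# Welfare is delta * theta, so regret(delta, theta) = max(theta, 0) - delta * theta.
L, U = -1.0, 3.0
thetas = np.linspace(L, U, 2001)       # grid over the identified set of states
deltas = np.linspace(0.0, 1.0, 2001)   # candidate treatment probabilities

# Regret of each rule at each state; worst-case regret over the identified set.
# (Regret is piecewise linear in theta, so the max is attained at an endpoint,
# which the grid contains exactly.)
regret = np.maximum(thetas, 0.0)[None, :] - deltas[:, None] * thetas[None, :]
worst_case = regret.max(axis=1)

delta_star = deltas[worst_case.argmin()]
print(delta_star)  # → 0.75, matching U / (U - L) = 3 / 4
```

The minimax-regret rule equalizes the worst regret from under-treating (theta = U) and over-treating (theta = L), which forces an interior, randomized treatment probability whenever the identified set straddles zero.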