🤖 AI Summary
This paper addresses the construction of ε-minimax decision rules in statistical decision theory—i.e., decision rules whose maximum risk deviates from the minimax optimum by at most ε. We propose a convex optimization framework based on randomized decision rules, reformulating the problem as a convex optimization over the (I−1)-simplex. A provably convergent mirror subgradient descent algorithm—equivalently, the Hedge algorithm—is developed, marking the first systematic incorporation of online learning’s Hedge principle into econometric minimax decision problems. Our approach relaxes conventional structural constraints on the rule class, enabling efficient approximation in complex, high-dimensional decision spaces. Theoretical guarantees and practical efficacy are validated across canonical econometric models and a policy experiment design task—specifically, external validity maximization via optimal site selection.
📝 Abstract
A decision rule is ε-minimax if it is minimax up to an additive factor ε. We present an algorithm for provably obtaining ε-minimax solutions of statistical decision problems. We are interested in problems where the statistician chooses randomly among I decision rules. The minimax solution of these problems admits a convex programming representation over the (I−1)-simplex. Our suggested algorithm is a well-known mirror subgradient descent routine designed to approximately solve the convex optimization problem that defines the minimax decision rule. This iterative routine is known in the computer science literature as the Hedge algorithm, and it is used in algorithmic game theory as a practical tool for finding approximate solutions of two-person zero-sum games. We apply the suggested algorithm to different minimax problems in the econometrics literature. An empirical application to the problem of optimally selecting sites to maximize the external validity of an experimental policy evaluation illustrates the usefulness of the suggested procedure.
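To make the idea concrete, here is a minimal sketch of the Hedge routine on a toy finite decision problem. The risk matrix, number of rounds, and step size below are illustrative assumptions, not taken from the paper: the statistician maintains exponential weights over I rules, nature best-responds against the current mixture each round, and the averaged mixture is ε-minimax with ε shrinking in the number of rounds.

```python
import math

# Hypothetical risk matrix (illustrative numbers only):
# RISK[i][j] = risk of decision rule i under state of nature j.
RISK = [
    [0.9, 0.1, 0.5],
    [0.2, 0.8, 0.4],
    [0.5, 0.5, 0.3],
]

def hedge_minimax(risk, n_rounds=2000):
    """Approximate the minimax mixture over rules via the Hedge algorithm."""
    n_rules, n_states = len(risk), len(risk[0])
    # Standard step size for exponential weights with losses in [0, 1].
    eta = math.sqrt(8.0 * math.log(n_rules) / n_rounds)
    weights = [1.0] * n_rules
    avg_mix = [0.0] * n_rules
    for _ in range(n_rounds):
        total = sum(weights)
        mix = [w / total for w in weights]
        # Nature best-responds: the state maximizing expected risk of the mixture.
        j_star = max(range(n_states),
                     key=lambda j: sum(mix[i] * risk[i][j] for i in range(n_rules)))
        # Multiplicative-weights update: down-weight rules risky in that state.
        weights = [w * math.exp(-eta * risk[i][j_star])
                   for i, w in enumerate(weights)]
        avg_mix = [a + m / n_rounds for a, m in zip(avg_mix, mix)]
    return avg_mix  # averaged play converges to an epsilon-minimax mixture

mixture = hedge_minimax(RISK)
worst_risk = max(sum(mixture[i] * RISK[i][j] for i in range(3)) for j in range(3))
```

On this toy matrix the minimax value is 0.5, and the averaged mixture's worst-case risk lands within the O(√(log I / T)) Hedge regret bound of it; the (I−1)-simplex constraint is maintained implicitly because the weights are renormalized each round.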