🤖 AI Summary
This study addresses the unresolved trade-off between investing in predictive capability and alternative policy instruments, such as capacity expansion or service-quality improvements, when allocating scarce resources. The authors propose an empirical framework that integrates causal inference, counterfactual simulation, and welfare economics, and they introduce rvp, the first operational open-source toolkit for quantifying the marginal welfare effects of prediction in resource allocation and for enabling cross-context policy comparisons. The framework is validated in two empirical applications: job placement services in Germany and poverty targeting in Ethiopia. The results show that the welfare value of prediction is highly context-dependent, giving policymakers a scalable benchmark for evaluating and prioritizing interventions under constrained resources.
📝 Abstract
Institutions increasingly use prediction to allocate scarce resources. From a design perspective, better predictions compete with other investments, such as expanding capacity or improving treatment quality. The central question is therefore not how to solve a specific allocation problem, but which problem to solve. In this work, we develop an empirical toolkit that helps planners form principled answers to this question and quantify the bottom-line welfare impact of investing in prediction versus other policy levers. Applying our framework in two real-world case studies, on German employment services and on poverty targeting in Ethiopia, we illustrate how decision-makers can reliably derive context-specific conclusions about the relative value of prediction in their allocation problem. We make our software toolkit, rvp, and parts of our data available to enable future empirical work in this area.
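The trade-off described in the abstract can be made concrete with a toy simulation. This is a minimal sketch, not the rvp toolkit or the authors' method: all names and modeling choices here are illustrative assumptions. A planner with fixed capacity treats the individuals with the highest predicted treatment effects; realized welfare is the sum of the true effects of those treated. One can then compare how welfare changes when prediction noise shrinks (investing in prediction) versus when capacity grows (investing in capacity).

```python
import random

def welfare(effects, scores, capacity):
    """Treat the `capacity` individuals with the highest predicted scores;
    realized welfare is the sum of their true treatment effects."""
    ranked = sorted(range(len(effects)), key=lambda i: scores[i], reverse=True)
    return sum(effects[i] for i in ranked[:capacity])

def simulate(noise_sd, capacity, n=1000, seed=0):
    """Toy allocation problem: true effects are standard normal, and the
    planner only observes them with Gaussian noise of scale `noise_sd`."""
    rng = random.Random(seed)
    effects = [rng.gauss(0.0, 1.0) for _ in range(n)]
    scores = [e + rng.gauss(0.0, noise_sd) for e in effects]
    return welfare(effects, scores, capacity)

# Three hypothetical policy options, evaluated on the same population:
baseline      = simulate(noise_sd=1.0, capacity=100)
better_pred   = simulate(noise_sd=0.5, capacity=100)  # invest in prediction
more_capacity = simulate(noise_sd=1.0, capacity=150)  # invest in capacity
```

Which investment yields the larger welfare gain depends on the parameters of the allocation problem, which is exactly the kind of context-dependence the paper reports; in a real analysis the effect distribution and prediction quality would be estimated from data rather than assumed.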