🤖 AI Summary
Predict-then-optimize (PTO) frameworks suffer from systematic decision bias when the objective function exhibits asymmetric sensitivity to estimation errors—particularly because they neglect downstream optimization structure. To address this, we propose a data-driven post-estimation adjustment method that constructs a closed-form correction term based on the ratio of the objective's third- and second-order derivatives, thereby calibrating parameter estimation bias while preserving PTO's modularity. We establish theoretical consistency and asymptotic superiority of the method across broad classes of pricing models. Empirically, it significantly improves revenue in challenging regimes—including small-sample settings, new-product introduction, and sparse price changes. Our key innovation lies in the first integration of curvature analysis into PTO bias correction, achieving a principled balance among interpretability, generality, and computational efficiency.
📝 Abstract
The predict-then-optimize (PTO) framework is a standard approach in data-driven decision-making, where a decision-maker first estimates an unknown parameter from historical data and then uses this estimate to solve an optimization problem. While widely used for its simplicity and modularity, PTO can lead to suboptimal decisions because the estimation step does not account for the structure of the downstream optimization problem. We study a class of problems where the objective function, evaluated at the PTO decision, is asymmetric with respect to estimation errors. This asymmetry causes the expected outcome to be systematically degraded by noise in the parameter estimate, as the penalty for underestimation differs from that for overestimation. To address this, we develop a data-driven post-estimation adjustment that improves decision quality while preserving the practicality and modularity of PTO. We show that when the objective function satisfies a particular curvature condition, based on the ratio of its third and second derivatives, the adjustment simplifies to a closed-form expression. This condition holds for a broad range of pricing problems, including those with linear, log-linear, and power-law demand models. Under this condition, we establish theoretical guarantees that our adjustment uniformly and asymptotically outperforms standard PTO, and we precisely characterize the resulting improvement. Additionally, we extend our framework to multi-parameter optimization and settings with biased estimators. Numerical experiments demonstrate that our method consistently improves revenue, particularly in small-sample regimes where estimation uncertainty is most pronounced. This makes our approach especially well-suited to pricing new products and to settings with limited historical price variation.
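To make the mechanism concrete, here is a minimal numerical sketch (not the authors' procedure) of a curvature-based post-estimation adjustment for the linear-demand pricing case the abstract mentions. It treats the realized revenue of the plug-in price as a function W of the slope estimate, Taylor-expands, and shifts the estimate by -(1/2)(W'''/W'')·σ², i.e., a correction built from the ratio of third and second derivatives. The demand parameters, the noise model (symmetric two-point noise), and the specific correction formula 3σ²/b̂ are all illustrative assumptions derived only for this toy model.

```python
# Illustrative sketch (hypothetical numbers, not the paper's general formula):
# curvature-based adjustment for pricing under linear demand d(p) = a - b*p.

def revenue(p, a, b):
    """True expected revenue at price p under linear demand a - b*p."""
    return p * (a - b * p)

def plug_in_price(a, b_hat):
    """PTO decision: the price that would be optimal if b_hat were the true slope."""
    return a / (2.0 * b_hat)

def adjusted_estimate(b_hat, sigma2):
    """Shift b_hat by -(1/2) * (W'''/W'') * sigma2, where W(b_hat) is the
    realized revenue of the plug-in price as a function of the estimate.
    For linear demand, W'' = -a^2/(2 b^3) and W''' = 3 a^2/b^4 (evaluated
    at the estimate), so the correction reduces to 3 * sigma2 / b_hat."""
    return b_hat + 3.0 * sigma2 / b_hat

a, b_true = 10.0, 1.0                          # hypothetical demand parameters
sigma = 0.2                                    # estimation noise scale
estimates = [b_true - sigma, b_true + sigma]   # symmetric two-point noise in b_hat

# Expected true revenue of standard PTO (plug the noisy estimate in directly).
rev_pto = sum(revenue(plug_in_price(a, bh), a, b_true)
              for bh in estimates) / len(estimates)

# Expected true revenue after the curvature-based adjustment of the estimate.
rev_adj = sum(revenue(plug_in_price(a, adjusted_estimate(bh, sigma ** 2)), a, b_true)
              for bh in estimates) / len(estimates)

oracle = revenue(plug_in_price(a, b_true), a, b_true)  # full-information benchmark
print(f"oracle {oracle:.3f}  plug-in {rev_pto:.4f}  adjusted {rev_adj:.4f}")
```

Although the estimation noise is symmetric, the plug-in revenue is asymmetric in the estimate, so the noisy estimates lose expected revenue relative to the oracle; the adjusted estimate recovers part of that gap, even though on the high-noise branch it can do slightly worse than plug-in.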