🤖 AI Summary
This paper studies when an organization should replace an incumbent machine learning model with a challenger model that relies on newly available features, accounting for heterogeneous deployment costs, learning dynamics, and reward discounting.
Method: We develop the first theoretically grounded framework that jointly ensures statistical rigor and economic rationality for model-switching decisions. The framework integrates learning-curve modeling, dynamic programming, and sequential decision optimization. We further propose a forward-looking sequential algorithm with finite-sample regret guarantees that asymptotically approaches the performance of a clairvoyant oracle.
Results: Empirical evaluation on real-world credit scoring data demonstrates that the optimal switch time systematically varies with deployment cost and learning curve curvature. Our algorithm significantly outperforms baselines, achieving sublinear regret and converging in value to the oracle upper bound.
📝 Abstract
We study the problem of deciding whether, and when, an organization should replace a trained incumbent model with a challenger relying on newly available features. We develop a unified economic and statistical framework that links learning-curve dynamics, data-acquisition and retraining costs, and discounting of future gains. First, we characterize the optimal switching time in stylized settings and derive closed-form expressions that quantify how horizon length, learning-curve curvature, and cost differentials shape the optimal decision. Second, we propose three practical algorithms: a one-shot baseline, a greedy sequential method, and a look-ahead sequential method. Using a real-world credit-scoring dataset with gradually arriving alternative data, we show that (i) optimal switching times vary systematically with cost parameters and learning-curve behavior, and (ii) the look-ahead sequential method outperforms the other methods and approaches the value of an oracle with full foresight. Finally, we establish finite-sample guarantees, including conditions under which the look-ahead sequential method achieves sublinear regret relative to that oracle. Our results provide an operational blueprint for economically sound model transitions as new data sources become available.
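The clairvoyant benchmark the abstract compares against can be sketched in a stylized setting. Everything below is a hypothetical illustration, not the paper's model: the learning curve, incumbent accuracy, discount factor, data-arrival rate, and horizon are all assumed parameter choices. The oracle simply enumerates every candidate switch time, computes the discounted cumulative reward of each, and picks the best, showing how the optimal switch time shifts with the deployment cost.

```python
import numpy as np

def challenger_acc(n, a=0.3, b=0.5, c=0.92):
    """Hypothetical inverse-power-law learning curve: the challenger's
    accuracy approaches the asymptote c as the sample count n grows."""
    return c - a * n ** (-b)

def discounted_value(switch_t, T=60, acc_inc=0.85, cost=0.5,
                     gamma=0.98, n0=100, rate=50):
    """Discounted cumulative reward when switching at period switch_t.
    Before the switch we earn the incumbent's accuracy; at the switch we
    pay a one-off deployment cost; afterwards we earn the challenger's
    improving accuracy. switch_t = T means 'never switch'."""
    value = 0.0
    for t in range(T):
        if t < switch_t:
            value += gamma ** t * acc_inc
        else:
            if t == switch_t:
                value -= gamma ** t * cost        # one-off switching cost
            n_t = n0 + rate * t                   # new-feature samples accrued by t
            value += gamma ** t * challenger_acc(n_t)
    return value

def optimal_switch(cost, T=60):
    """Clairvoyant (oracle) optimum: enumerate all candidate switch times."""
    values = [discounted_value(s, T=T, cost=cost) for s in range(T + 1)]
    return int(np.argmax(values))
```

With these toy parameters, raising the deployment cost weakly delays the oracle's switch time (possibly to "never"), mirroring the comparative statics the closed-form analysis derives.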