Pareto-Optimality, Smoothness, and Stochasticity in Learning-Augmented One-Max-Search

📅 2025-02-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper studies learning-augmented one-max search, with the aim of jointly optimizing algorithmic consistency (performance when predictions are accurate) and robustness (worst-case performance guarantees). Targeting realistic settings in which both the price sequence and the prediction are stochastic, the authors propose the first online algorithmic framework that combines smooth design with theoretical optimality. The approach attains the Pareto-optimal trade-off between consistency and robustness, achieving the best possible value of each criterion given the other, and establishes, for the first time, a competitive-ratio analysis that accommodates *both* prediction error and randomness in the price process. Methodologically, the work integrates competitive analysis, smooth algorithm design, and stochastic process modeling, thereby resolving a central tension in prior learning-augmented online algorithms: the inherent consistency-robustness trade-off. The resulting algorithm provides rigorous performance guarantees under uncertainty while adapting gracefully to prediction quality.

📝 Abstract
One-max search is a classic problem in online decision-making, in which a trader acts on a sequence of revealed prices and accepts one of them irrevocably to maximise its profit. The problem has been studied both in probabilistic and in worst-case settings, notably through competitive analysis, and more recently in learning-augmented settings in which the trader has access to a prediction on the sequence. However, existing approaches either lack smoothness, or do not achieve optimal worst-case guarantees: they do not attain the best possible trade-off between the consistency and the robustness of the algorithm. We close this gap by presenting the first algorithm that simultaneously achieves both of these important objectives. Furthermore, we show how to leverage the obtained smoothness to provide an analysis of one-max search in stochastic learning-augmented settings which capture randomness in both the observed prices and the prediction.
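To make the abstract's setting concrete, the following is a minimal sketch of the classic worst-case baseline for one-max search (not the paper's new algorithm): when prices are known to lie in an interval `[m, M]`, accepting the first price that reaches the reservation threshold `sqrt(m*M)` yields the optimal worst-case competitive ratio of `sqrt(M/m)`. The function name and signature are illustrative.

```python
import math

def reservation_price_search(prices, m, M):
    """Classic baseline for one-max search (not the paper's algorithm).

    With all prices guaranteed to lie in [m, M], accept the first price
    that meets the reservation threshold sqrt(m * M). This rule is
    sqrt(M / m)-competitive in the worst case.
    """
    threshold = math.sqrt(m * M)
    for p in prices:
        if p >= threshold:
            return p  # accept irrevocably
    # No price reached the threshold: the trader is forced to take
    # the last revealed price.
    return prices[-1]
```

For example, with `m = 1`, `M = 16` the threshold is 4, so on the sequence `[2, 5, 9, 4]` the trader accepts 5, the first price at or above the threshold.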
Problem

Research questions and friction points this paper is trying to address.

Optimizing one-max search algorithms
Balancing consistency and robustness
Analyzing stochastic learning-augmented settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pareto-Optimality
Smoothness
Stochastic Learning-Augmented
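A generic way to picture the consistency-robustness trade-off listed above is a threshold that interpolates between the worst-case reservation price and the predicted maximum, steered by a trust parameter. The sketch below is purely illustrative of this idea and is not the paper's Pareto-optimal algorithm; `lam` is a hypothetical trust parameter introduced here for illustration.

```python
import math

def augmented_threshold_search(prices, m, M, prediction, lam):
    """Illustrative consistency-robustness interpolation (hypothetical,
    not the paper's algorithm).

    lam in [0, 1] is a trust parameter: lam = 0 ignores the prediction
    and falls back to the worst-case reservation price sqrt(m * M);
    lam = 1 trusts the predicted maximum fully.
    """
    robust = math.sqrt(m * M)  # worst-case reservation threshold
    threshold = (1 - lam) * robust + lam * prediction
    for p in prices:
        if p >= threshold:
            return p  # accept irrevocably
    return prices[-1]  # forced acceptance of the final price
```

With an accurate prediction and `lam` close to 1 the trader waits for a near-optimal price (consistency); with `lam` close to 0 the behaviour degrades gracefully to the worst-case baseline (robustness). Smoothness, in the paper's sense, concerns how the guarantee degrades as the prediction error grows.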