Competitive Algorithms for Online Knapsack with Succinct Predictions

📅 2024-06-26
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper studies the learning-augmented online knapsack problem, aiming for a near-Pareto-optimal trade-off between consistency (performance under accurate predictions) and robustness (worst-case performance under erroneous predictions). To overcome the reliance of existing methods on complex, high-fidelity prediction models, we propose the first lightweight learning-augmented framework that requires only a single-value or interval prediction. The approach builds a two-policy meta-algorithm on a fractional relaxation of online knapsack, ensuring strong consistency and robustness regardless of prediction reliability. We theoretically establish that the algorithm achieves a near-tight consistency-robustness trade-off bound. Experiments demonstrate that it significantly outperforms prediction-agnostic baselines and frequently surpasses state-of-the-art methods that rely on high-accuracy predictors, while being simpler, more general, and more practical.

📝 Abstract
In the online knapsack problem, the goal is to pack items arriving online with different values and weights into a capacity-limited knapsack to maximize the total value of the accepted items. We study learning-augmented algorithms for this problem, which aim to use machine-learned predictions to move beyond pessimistic worst-case guarantees. Existing learning-augmented algorithms for online knapsack consider relatively complicated prediction models that give an algorithm substantial information about the input, such as the total weight of items at each value. In practice, such predictions can be error-sensitive and difficult to learn. Motivated by this limitation, we introduce a family of learning-augmented algorithms for online knapsack that use succinct predictions. In particular, the machine-learned prediction given to the algorithm is just a single value or interval that estimates the minimum value of any item accepted by an offline optimal solution. By leveraging a relaxation to online fractional knapsack, we design algorithms that use such succinct predictions in both the trusted setting (i.e., with a perfect prediction) and the untrusted setting, where we prove that a simple meta-algorithm achieves a nearly optimal consistency-robustness trade-off. Empirically, we show that our algorithms significantly outperform baselines that do not use predictions and often outperform algorithms based on more complex prediction models.
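The abstract's core idea, using a single predicted value as an acceptance threshold, can be sketched in a few lines. The following is an illustrative sketch only, not the paper's actual algorithm: `v_hat` stands in for the succinct prediction (an estimate of the minimum value density the offline optimum accepts), and all names are hypothetical.

```python
def threshold_policy(items, capacity, v_hat):
    """Greedy online policy (illustrative sketch): items arrive one at a
    time as (value, weight) pairs; accept an item when its value density
    meets the predicted threshold v_hat and it still fits."""
    packed, total_value, used = [], 0.0, 0.0
    for value, weight in items:
        density = value / weight
        if density >= v_hat and used + weight <= capacity:
            packed.append((value, weight))
            total_value += value
            used += weight
    return packed, total_value
```

With a perfect prediction this policy mimics the offline optimum's acceptance rule; with a bad prediction it can perform arbitrarily poorly, which is exactly the failure mode the paper's meta-algorithm is designed to guard against.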
Problem

Research questions and friction points this paper is trying to address.

Achieve near-optimal consistency-robustness trade-offs in online knapsack problems
Combine learning-augmented and worst-case algorithms for improved performance
Introduce fractional-to-integral conversion for better online algorithm design
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines learning-augmented and worst-case algorithms
Uses succinct predictions for item values
Introduces fractional-to-integral conversion procedure
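One natural way to combine a prediction-trusting policy with a worst-case-robust one, in the spirit of the two-policy meta-algorithm described above, is to split the capacity between them. This is a hedged sketch under assumptions, not the paper's exact procedure: `lam` is a hypothetical trust parameter, `v_hat` the succinct prediction, and the robust sub-policy uses the classic adaptive threshold for online knapsack with value densities known to lie in `[L, U]`.

```python
import math

def robust_threshold(z, L, U):
    # Classic adaptive threshold for online knapsack with densities in
    # [L, U]: grows from L/e to U as the utilization z goes from 0 to 1.
    return (U * math.e / L) ** z * (L / math.e)

def meta_algorithm(items, capacity, v_hat, lam, L, U):
    """Two-budget meta-algorithm (illustrative): a fraction lam of the
    capacity is managed by the prediction-trusting threshold v_hat; the
    remaining (1 - lam) by the robust adaptive threshold. Each arriving
    item goes to the first sub-knapsack that admits it."""
    trusted_cap, robust_cap = lam * capacity, (1 - lam) * capacity
    trusted_used = robust_used = total = 0.0
    for value, weight in items:
        density = value / weight
        if density >= v_hat and trusted_used + weight <= trusted_cap:
            trusted_used += weight
            total += value
        elif (robust_cap > 0
              and density >= robust_threshold(robust_used / robust_cap, L, U)
              and robust_used + weight <= robust_cap):
            robust_used += weight
            total += value
    return total
```

Intuitively, `lam` trades consistency for robustness: with `lam` near 1 the algorithm follows the prediction almost entirely, while with `lam` near 0 it falls back to the worst-case-robust policy regardless of prediction quality.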