🤖 AI Summary
This paper studies the online unweighted bipartite matching problem in the random arrival order model, leveraging unreliable predictions of the online vertices' neighborhoods within a learning-augmented framework. We propose a prefix-sampling mechanism to assess prediction quality and adaptively switch between strategies: following the predictions when confidence is high, and falling back to a β-competitive baseline algorithm when it is low. For the first time, and without assuming that the optimal matching has size *n*, we simultaneously achieve (1−o(1))-consistency and (β−o(1))-robustness. Our key innovation is an error-smoothed competitive ratio: the algorithm's performance degrades gracefully as prediction error increases rather than collapsing abruptly, which improves stability in practical settings where predictions are inherently noisy and imperfect.
📝 Abstract
We study the online unweighted bipartite matching problem in the random arrival order model, with $n$ offline and $n$ online vertices, in the learning-augmented setting: the algorithm is provided with untrusted predictions of the types (neighborhoods) of the online vertices. We build upon the work of Choo et al. (ICML 2024, pp. 8762-8781), who proposed an approach that uses a prefix of the arrival sequence as a sample to determine whether the predictions are close to the true arrival sequence, and then either follows the predictions or uses a known baseline algorithm that ignores the predictions and is $\beta$-competitive. Their analysis is limited to the case that the optimal matching has size $n$, i.e., every online vertex can be matched. We generalize their approach and analysis by removing all assumptions on the size of the optimal matching, requiring only that the size of the predicted matching is at least $\alpha n$ for some constant $0 < \alpha \le 1$. Our learning-augmented algorithm achieves $(1-o(1))$-consistency and $(\beta-o(1))$-robustness. Additionally, we show that the competitive ratio degrades smoothly between consistency and robustness as the prediction error increases.
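To make the prefix-sampling idea concrete, the following is a minimal, illustrative sketch of the switching logic only, not the paper's actual algorithm or analysis. The function name, the error estimator (fraction of sampled online types not covered by the predicted multiset of types), and the `sample_frac`/`error_threshold` parameters are all hypothetical choices for illustration; the strategies are passed in as opaque objects.

```python
def prefix_sampling_switch(arrivals, predicted_types, baseline, follow_predictions,
                           sample_frac=0.1, error_threshold=0.2):
    """Illustrative sketch (not the paper's exact procedure): observe a prefix
    of the random-order arrival sequence, estimate how well the predicted types
    match it, and commit to one strategy for the remaining online vertices.

    arrivals         -- sequence of observed online vertex types (e.g. neighborhoods)
    predicted_types  -- predicted multiset of online vertex types
    baseline         -- strategy used when the estimated error is high
    follow_predictions -- strategy used when the estimated error is low
    """
    n = len(arrivals)
    k = max(1, int(sample_frac * n))  # prefix length used as a sample
    prefix = arrivals[:k]

    # Estimate the prediction error as the fraction of sampled types that
    # cannot be matched against the predicted multiset (hypothetical estimator).
    remaining = list(predicted_types)
    mismatches = 0
    for v in prefix:
        if v in remaining:
            remaining.remove(v)  # consume one predicted copy of this type
        else:
            mismatches += 1
    est_error = mismatches / k

    strategy = follow_predictions if est_error <= error_threshold else baseline
    return strategy, est_error
```

With accurate predictions the estimated error is low and the predictions are followed; with badly wrong predictions the estimate exceeds the threshold and the (β-competitive) baseline takes over, which is the mechanism behind the consistency/robustness trade-off described above.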