🤖 AI Summary
This work addresses online learning in multi-agent settings where only ordinal feedback (rankings over actions rather than numeric utilities) is available, and investigates when sublinear regret and convergence to equilibrium can be achieved. Under both full-information and bandit feedback, the authors study two ranking mechanisms: rankings induced by instantaneous utilities at the current timestep, and rankings induced by time-averaged utilities up to the current timestep, modeled via the Plackett–Luce distribution. On the negative side, sublinear regret is impossible with instantaneous-utility ranking feedback in general, and remains impossible even with time-average ranking feedback when the Plackett–Luce temperature is sufficiently small. On the positive side, the authors propose adaptive algorithms that achieve sublinear regret whenever the utility sequence has sublinear total variation, and show that in the full-information time-average setting this assumption can be removed entirely. Through a novel analysis linking external regret to equilibrium convergence, the paper establishes, for the first time, a theoretical connection between such feedback and coarse correlated equilibria, proving that when all agents adopt the proposed algorithm, their joint behavior converges to an approximate coarse correlated equilibrium. Empirical validation on large language model routing tasks demonstrates the algorithm's practical efficacy.
📝 Abstract
Online learning in arbitrary, and possibly adversarial, environments has been extensively studied in sequential decision-making, and it is closely connected to equilibrium computation in game theory. Most existing online learning algorithms rely on \emph{numeric} utility feedback from the environment, which may be unavailable in human-in-the-loop applications and/or may be restricted by privacy concerns. In this paper, we study an online learning model in which the learner only observes a \emph{ranking} over a set of proposed actions at each timestep. We consider two ranking mechanisms: rankings induced by the \emph{instantaneous} utility at the current timestep, and rankings induced by the \emph{time-average} utility up to the current timestep, under both \emph{full-information} and \emph{bandit} feedback settings. Using the standard external-regret metric, we show that sublinear regret is impossible with instantaneous-utility ranking feedback in general. Moreover, when the ranking model is relatively deterministic, \emph{i.e.}, under the Plackett-Luce model with a temperature that is sufficiently small, sublinear regret is also impossible with time-average utility ranking feedback. We then develop new algorithms that achieve sublinear regret under the additional assumption that the utility sequence has sublinear total variation. Notably, for full-information time-average utility ranking feedback, this additional assumption can be removed. As a consequence, when all players in a normal-form game follow our algorithms, repeated play yields an approximate coarse correlated equilibrium. We also demonstrate the effectiveness of our algorithms in an online large-language-model routing task.
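The ranking feedback at the heart of the paper can be illustrated with a minimal sketch of Plackett–Luce sampling: items are drawn without replacement, with each draw proportional to the exponentiated (temperature-scaled) utility of the items still remaining. The function name and interface below are illustrative, not from the paper; the sketch only shows why a small temperature makes the ranking nearly deterministic, matching the impossibility regime discussed above.

```python
import numpy as np

def sample_ranking_pl(utilities, temperature, rng):
    """Sample a ranking from a Plackett-Luce model (illustrative sketch).

    Each round picks one of the remaining items i with probability
    proportional to exp(u_i / temperature); the picked item takes the
    next position in the ranking. As temperature -> 0, the sampled
    ranking concentrates on the descending-utility order.
    """
    remaining = list(range(len(utilities)))
    ranking = []
    while remaining:
        # Numerically stable softmax over the remaining items.
        scores = np.array([utilities[i] for i in remaining]) / temperature
        weights = np.exp(scores - scores.max())
        probs = weights / weights.sum()
        choice = rng.choice(len(remaining), p=probs)
        ranking.append(remaining.pop(choice))
    return ranking

rng = np.random.default_rng(0)
# With a tiny temperature the ranking almost surely follows the utilities:
print(sample_ranking_pl([0.9, 0.1, 0.5], temperature=1e-3, rng=rng))  # [0, 2, 1]
```

With a large temperature the same call produces near-uniform rankings, which is the regime in which rankings carry little utility information per step.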