Near-Optimal Algorithms for Omniprediction

πŸ“… 2025-01-28
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This paper addresses the construction of universal predictors: achieving near-optimal prediction performance simultaneously for all loss functions in a given family β„’ (including Lipschitz and bounded-variation losses) relative to a hypothesis class β„‹, in both online and offline settings. We propose the first online omniprediction framework, attaining Γ•(√(T log |β„‹|)) regret and thereby matching, up to logarithmic factors, the optimal rate for online minimization of a single loss. We further design an efficient online-to-offline reduction that, for the first time, accommodates infinite hypothesis classes and rich loss families. Our theoretical analysis integrates Rademacher complexity bounds, ERM oracle calls, and loss-specific generalization theory. The offline algorithm outputs an (β„’, β„‹, Ξ΅(m))-omnipredictor from m samples, where Ξ΅(m) scales near-linearly in the Rademacher complexity of the composite function class Th ∘ β„‹.
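
For reference, the target object is the standard omniprediction guarantee from the literature (paraphrased here; the paper's exact formulation may differ in notation): a single predictor $p$ is an $(\mathcal{L}, \mathcal{H}, \varepsilon)$-omnipredictor if, for every loss $\ell \in \mathcal{L}$, there is a loss-specific post-processing $k_\ell$ of the prediction alone such that

$$
\mathbb{E}_{(x,y)\sim \mathcal{D}}\big[\ell\big(k_\ell(p(x)),\, y\big)\big] \;\le\; \min_{h \in \mathcal{H}}\; \mathbb{E}_{(x,y)\sim \mathcal{D}}\big[\ell\big(h(x),\, y\big)\big] \;+\; \varepsilon .
$$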

πŸ“ Abstract
Omnipredictors are simple prediction functions that encode loss-minimizing predictions with respect to a hypothesis class $H$, simultaneously for every loss function within a class of losses $L$. In this work, we give near-optimal learning algorithms for omniprediction, in both the online and offline settings. To begin, we give an oracle-efficient online learning algorithm that achieves $(L,H)$-omniprediction with $\tilde{O}(\sqrt{T \log |H|})$ regret for any class of Lipschitz loss functions $L \subseteq L_{\mathrm{Lip}}$. Quite surprisingly, this regret bound matches the optimal regret for *minimization of a single loss function* (up to a $\sqrt{\log(T)}$ factor). Given this online algorithm, we develop an online-to-offline conversion that achieves near-optimal complexity across a number of measures. In particular, for all bounded loss functions within the class of bounded-variation losses $L_{\mathrm{BV}}$ (which include all convex, all Lipschitz, and all proper losses) and any (possibly infinite) $H$, we obtain an offline learning algorithm that, leveraging an (offline) ERM oracle and $m$ samples from $D$, returns an efficient $(L_{\mathrm{BV}}, H, \varepsilon(m))$-omnipredictor for $\varepsilon(m)$ scaling near-linearly in the Rademacher complexity of $\mathrm{Th} \circ H$.
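
To make the online-to-offline conversion concrete, below is a minimal sketch of the generic online-to-batch pattern such reductions build on (illustrative only, not the paper's algorithm; the `current_predictor`/`update` interface and the toy learner are hypothetical):

```python
def online_to_offline(online_learner, samples):
    """Generic online-to-batch conversion (illustrative sketch only).

    Replays m offline samples through an online learner, recording the
    predictor the learner would have played at each round; the returned
    offline predictor averages those per-round predictions. Sublinear
    online regret then translates into small offline excess risk.
    """
    snapshots = []
    for x, y in samples:
        snapshots.append(online_learner.current_predictor())  # freeze round-t predictor
        online_learner.update(x, y)                           # one online learning step

    def offline_predictor(x):
        # Average over rounds; a randomized variant would instead draw
        # one snapshot uniformly at random.
        return sum(p(x) for p in snapshots) / len(snapshots)

    return offline_predictor


class RunningMeanLearner:
    """Toy online learner: always predicts the running mean of labels seen so far."""

    def __init__(self):
        self.total, self.count = 0.0, 0

    def current_predictor(self):
        mean = self.total / self.count if self.count else 0.0
        return lambda x, m=mean: m  # constant predictor frozen at this round

    def update(self, x, y):
        self.total += y
        self.count += 1


# Usage: convert the toy online learner into an offline predictor.
predictor = online_to_offline(RunningMeanLearner(), [(0, 1.0), (1, 0.0), (2, 1.0)])
print(predictor(3))  # averaged prediction: (0.0 + 1.0 + 0.5) / 3 = 0.5
```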
Problem

Research questions and friction points this paper is trying to address.

Optimal Learning
Prediction Methods
Bounded Loss Functions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Near-Optimal Learning Algorithms
Lipschitz and Bounded-Variation Losses
Online-to-Offline Conversion