High-Dimensional Calibration from Swap Regret

📅 2025-05-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper studies online calibration for multidimensional forecasting over an arbitrary convex set $\mathcal{P} \subset \mathbb{R}^d$ under an arbitrary norm $\|\cdot\|$. It establishes, for the first time, a tight theoretical connection between calibration error and swap regret in online linear optimization, unifying high-dimensional calibration upper bounds via swap regret. The authors propose a generic calibration algorithm, based on the TreeSwap algorithm with Follow-the-Leader (FTL) as a subroutine and a dual-norm analysis, that requires no regularization, no prior knowledge of the optimal regret rate $\rho$, and no assumptions on norm structure. On the $d$-dimensional probability simplex, the algorithm achieves $\varepsilon$-calibration in $T = d^{O(1/\varepsilon^2)}$ rounds. Moreover, the paper provides the first tight lower bound of $\exp(\mathrm{poly}(1/\varepsilon))$, proving that exponential dependence on $1/\varepsilon$ is unavoidable and significantly strengthening prior results.
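
The $\varepsilon$-calibration guarantee above can be made concrete for the $\ell_1$ case: group rounds by the exact forecast made, sum the residuals (outcome minus forecast) within each group, and add up the $\ell_1$-norms of the group sums; $\varepsilon$-calibration after $T$ rounds means this total is at most $\varepsilon T$. A minimal sketch (the function name and bucketing-by-exact-forecast convention are my own; the paper treats a general norm $\|\cdot\|$, not just $\ell_1$):

```python
import numpy as np

def l1_calibration_error(forecasts, outcomes):
    """Unnormalized l1 calibration error: for each distinct forecast p,
    accumulate the residuals (outcome - p) over the rounds where p was
    predicted, then sum the l1-norms of those accumulated residuals.
    A well-calibrated forecaster makes the residuals cancel in every group."""
    buckets = {}
    for p, y in zip(forecasts, outcomes):
        key = tuple(p)
        resid = np.asarray(y, dtype=float) - np.asarray(p, dtype=float)
        buckets[key] = buckets.get(key, 0.0) + resid
    return float(sum(np.abs(v).sum() for v in buckets.values()))
```

For example, forecasting $(0.5, 0.5)$ on two rounds whose outcomes are $(1,0)$ and $(0,1)$ gives error $0$, since the residuals cancel within the group.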

📝 Abstract
We study the online calibration of multi-dimensional forecasts over an arbitrary convex set $\mathcal{P} \subset \mathbb{R}^d$ relative to an arbitrary norm $\Vert\cdot\Vert$. We connect this with the problem of external regret minimization for online linear optimization, showing that if it is possible to guarantee $O(\sqrt{\rho T})$ worst-case regret after $T$ rounds when actions are drawn from $\mathcal{P}$ and losses are drawn from the dual $\Vert \cdot \Vert_*$ unit norm ball, then it is also possible to obtain $\epsilon$-calibrated forecasts after $T = \exp(O(\rho/\epsilon^2))$ rounds. When $\mathcal{P}$ is the $d$-dimensional simplex and $\Vert \cdot \Vert$ is the $\ell_1$-norm, the existence of $O(\sqrt{T\log d})$-regret algorithms for learning with experts implies that it is possible to obtain $\epsilon$-calibrated forecasts after $T = \exp(O(\log{d}/\epsilon^2)) = d^{O(1/\epsilon^2)}$ rounds, recovering a recent result of Peng (2025). Interestingly, our algorithm obtains this guarantee without requiring access to any online linear optimization subroutine or knowledge of the optimal rate $\rho$ -- in fact, our algorithm is identical for every setting of $\mathcal{P}$ and $\Vert \cdot \Vert$. Instead, we show that the optimal regularizer for the above OLO problem can be used to upper bound the above calibration error by a swap regret, which we then minimize by running the recent TreeSwap algorithm with Follow-The-Leader as a subroutine. Finally, we prove that any online calibration algorithm that guarantees $\epsilon T$ $\ell_1$-calibration error over the $d$-dimensional simplex requires $T \geq \exp(\mathrm{poly}(1/\epsilon))$ (assuming $d \geq \mathrm{poly}(1/\epsilon)$). This strengthens the corresponding $d^{\Omega(\log{1/\epsilon})}$ lower bound of Peng, and shows that an exponential dependence on $1/\epsilon$ is necessary.
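
The Follow-The-Leader subroutine named in the abstract, specialized to linear losses over the simplex, is simple to state: play the action minimizing cumulative loss so far, which for linear losses over the simplex is always a vertex. A hedged sketch (function name and tie-breaking rule are my own assumptions; the paper runs FTL inside TreeSwap, which is not reproduced here):

```python
import numpy as np

def ftl_simplex(loss_sequence, d):
    """Follow-the-Leader over the d-simplex with linear losses.
    Each round plays the vertex with the smallest cumulative loss so far;
    when all coordinates are tied (including round one), plays the
    uniform point. Returns the list of actions played."""
    cum = np.zeros(d)
    actions = []
    for loss in loss_sequence:
        if np.allclose(cum, cum[0]):
            x = np.full(d, 1.0 / d)  # no leader yet: play uniform
        else:
            x = np.zeros(d)
            x[int(np.argmin(cum))] = 1.0  # best vertex in hindsight
        actions.append(x)
        cum += np.asarray(loss, dtype=float)
    return actions
```

On its own, FTL has poor worst-case regret for adversarial linear losses; the point of the paper's construction is that running it as the subroutine inside TreeSwap still yields the calibration guarantee, without any regularization.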
Problem

Research questions and friction points this paper is trying to address.

Online calibration of multi-dimensional forecasts over arbitrary convex sets
Connecting calibration error to regret minimization in online linear optimization
Lower bounds on calibration error for the simplex under the $\ell_1$-norm
Innovation

Methods, ideas, or system contributions that make the work stand out.

Connects calibration with external regret minimization for online linear optimization
Minimizes swap regret by running TreeSwap with Follow-The-Leader as a subroutine
Proves an $\exp(\mathrm{poly}(1/\epsilon))$ lower bound, showing exponential dependence on $1/\epsilon$ is necessary