AI Summary
This work addresses fixed-point computation for non-self-mappings. We systematically transform online regret-minimization algorithms, particularly Online Gradient Descent (OGD) and AdaGrad-type methods, into fixed-point iteration schemes with provable convergence guarantees. The proposed framework generalizes the classical Krasnoselskii–Mann iteration and, for the first time, brings regret analysis from online optimization to bear on deriving adaptive fixed-point updates in the non-self-mapping setting. Theoretically, under standard assumptions (e.g., cocoercivity or monotonicity), our methods achieve strong convergence and admit explicit iteration-complexity bounds. Empirically, the AdaGrad-based adaptive scheme significantly outperforms the standard Krasnoselskii–Mann method on challenging nonsmooth and nonmonotone fixed-point problems, including variational inequalities and implicit deep network solving, demonstrating both accelerated convergence and enhanced robustness.
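For reference, the classical Krasnoselskii–Mann iteration for an operator $T$, which the framework recovers by converting Online Gradient Descent, is the averaged update

$$x_{k+1} = (1 - \alpha_k)\, x_k + \alpha_k\, T(x_k), \qquad \alpha_k \in (0, 1).$$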
Abstract
We propose a conversion scheme that turns regret-minimizing algorithms into fixed-point iterations, with convergence guarantees following from regret bounds. The resulting iterations can be seen as a grand extension of the classical Krasnoselskii--Mann iterations, as the latter are recovered by converting the Online Gradient Descent algorithm. This approach yields new, simple iterations for finding fixed points of non-self operators. We also focus on converting algorithms from the AdaGrad family of regret minimizers, and thus obtain fixed-point iterations with adaptive guarantees of a new kind. Numerical experiments on various problems demonstrate faster convergence of AdaGrad-based fixed-point iterations over Krasnoselskii--Mann iterations.
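As a concrete illustration of the two kinds of schemes being compared, below is a minimal Python sketch of the classical Krasnoselskii--Mann iteration alongside an AdaGrad-style adaptive fixed-point iteration. The adaptive variant (per-coordinate steps scaled by accumulated squared residuals) is only an assumption about the general shape of the AdaGrad-based schemes, not the paper's exact algorithm; the operator `T`, the step sizes, and the stopping tolerance are placeholders.

```python
import numpy as np

def km_iteration(T, x0, alpha=0.5, iters=1000, tol=1e-8):
    """Classical Krasnoselskii--Mann: x_{k+1} = x_k + alpha * (T(x_k) - x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        residual = T(x) - x              # fixed-point residual; zero at a fixed point
        if np.linalg.norm(residual) < tol:
            break
        x = x + alpha * residual         # convex combination of x and T(x)
    return x

def adagrad_fp_iteration(T, x0, eta=1.0, eps=1e-8, iters=1000, tol=1e-8):
    """Illustrative AdaGrad-style fixed-point iteration: per-coordinate
    step sizes shrink with the accumulated squared residuals, mimicking
    how AdaGrad adapts to the gradients it has observed."""
    x = np.asarray(x0, dtype=float)
    g2 = np.zeros_like(x)                # running sum of squared residuals
    for _ in range(iters):
        residual = T(x) - x
        if np.linalg.norm(residual) < tol:
            break
        g2 += residual ** 2
        x = x + eta * residual / (np.sqrt(g2) + eps)  # adaptive per-coordinate step
    return x

# Toy usage: T is an affine contraction, so both iterations approach its fixed point.
A = 0.5 * np.eye(2)
b = np.array([1.0, -1.0])
T = lambda x: A @ x + b
print(km_iteration(T, np.zeros(2)))
print(adagrad_fp_iteration(T, np.zeros(2)))
```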