🤖 AI Summary
This paper addresses the ℓ∞ regression problem. Method: We propose the first randomized Multiplicative Weight Update (MWU) algorithm that integrates acceleration techniques with a matrix inverse maintenance data structure. Our approach introduces a novel accelerated MWU scheme designed for numerical stability and robustness, achieving for the first time an efficient synergy between accelerated iterations and inverse maintenance. The theoretical analysis breaks free from the interior-point method (IPM) framework, yielding stronger stability guarantees. Results: Assuming ω = 2 + o(1), our algorithm improves the randomized time complexity of ℓ∞ regression from Õ(n²⁺¹⁄₁₈) to Õ(n²⁺¹⁄₂₂.₅) and the deterministic time complexity from Õ(n²⁺¹⁄₆) to Õ(n²⁺¹⁄₁₂). This substantially enhances efficiency for low-accuracy solutions and establishes a new algorithmic tool and theoretical foundation for structured convex optimization.
📝 Abstract
We propose a randomized multiplicative weight update (MWU) algorithm for $\ell_{\infty}$ regression that runs in $\widetilde{O}\left(n^{2+1/22.5}\,\text{poly}(1/\epsilon)\right)$ time when $\omega = 2+o(1)$, improving upon the previous best $\widetilde{O}\left(n^{2+1/18}\,\text{poly}\log(1/\epsilon)\right)$ runtime in the low-accuracy regime. Our algorithm combines state-of-the-art inverse maintenance data structures with acceleration. In order to do so, we propose a novel acceleration scheme for MWU that exhibits {\it stability} and {\it robustness}, which are required for efficient implementations of the inverse maintenance data structures. We also design a faster {\it deterministic} MWU algorithm that runs in $\widetilde{O}\left(n^{2+1/12}\,\text{poly}(1/\epsilon)\right)$ time when $\omega = 2+o(1)$, improving upon the previous best $\widetilde{O}\left(n^{2+1/6}\,\text{poly}\log(1/\epsilon)\right)$ runtime in the low-accuracy regime. We achieve this by showing a novel stability result that goes beyond previously known works based on interior point methods (IPMs). Our work is the first to use acceleration and inverse maintenance together efficiently, finally making the two most important building blocks of modern structured convex optimization compatible.