AI Summary
This work addresses smooth convex optimization under the $\ell_p$ norm, breaking the traditional reliance of acceleration methods on Euclidean geometry. We propose a novel framework coupling primal-dual iterations taken in differing norms through an implicitly determined interpolation parameter, built upon primal-dual sequence design, non-Euclidean smoothness analysis, and $\ell_p$-specific first-order oracle calls. For $d$-dimensional $\ell_p$-smooth functions, our method improves the first-order oracle complexity by a factor of up to $O(d^{1-2/p})$ over classical accelerated rates (for instance, a factor of $O(d^{1/2})$ when $p = 4$), circumventing a long-standing bottleneck in dimension dependence for accelerated non-Euclidean optimization. The result provides both conceptual clarity and practical efficiency gains for high-dimensional $\ell_p$-structured problems.
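To make the dimension dependence concrete (illustrative arithmetic only, not an additional claim from the paper), the improvement factor $d^{1-2/p}$ vanishes in the Euclidean case and grows toward linear in $d$ as $p \to \infty$:

$$
d^{\,1-\frac{2}{p}} =
\begin{cases}
1, & p = 2 \ \text{(Euclidean: no improvement)},\\[2pt]
d^{1/2}, & p = 4,\\[2pt]
\to d, & p \to \infty \ (\ell_\infty\text{-type problems}).
\end{cases}
$$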
Abstract
Recent advances (Sherman, 2017; Sidford and Tian, 2018; Cohen et al., 2021) have overcome the fundamental barrier of dimension dependence in the iteration complexity of solving $\ell_\infty$ regression with first-order methods. Yet it remains unclear to what extent such acceleration can be achieved for general $\ell_p$ smooth functions. In this paper, we propose a new accelerated first-order method for convex optimization under non-Euclidean smoothness assumptions. In contrast to standard acceleration techniques, our approach uses primal-dual iterate sequences taken with respect to \textit{differing} norms, which are then coupled using an \textit{implicitly} determined interpolation parameter. For $\ell_p$ norm smooth problems in $d$ dimensions, our method provides an iteration complexity improvement of up to $O(d^{1-\frac{2}{p}})$ in terms of calls to a first-order oracle, thereby allowing us to circumvent long-standing barriers in accelerated non-Euclidean steepest descent.
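To illustrate the structure of such a scheme, here is a minimal, self-contained Python sketch. It is not the paper's algorithm: the toy objective, step sizes, the Euclidean dual-averaging sequence, and the one-dimensional ternary search standing in for the implicitly determined interpolation parameter are all assumptions made for illustration. The sketch only shows the shape of the idea: a primal sequence driven in $\ell_p$ geometry coupled to a dual sequence kept in a different (here, Euclidean) geometry through a scalar $\tau$ fixed by a condition solved at run time rather than a closed-form schedule.

```python
import numpy as np

# Toy smooth convex objective f(x) = 0.5 * ||Ax - b||_2^2 (illustrative only).
rng = np.random.default_rng(0)
d = 50
A = rng.standard_normal((200, d)) / np.sqrt(200.0)
b = rng.standard_normal(200)

def f(x):
    r = A @ x - b
    return 0.5 * r @ r

def grad_f(x):
    return A.T @ (A @ x - b)

def lp_steepest_step(x, g, eta, p):
    """Primal sequence: steepest-descent step measured in the l_p norm.
    The direction maximizing <g, u> over ||u||_p <= 1 has coordinates
    proportional to sign(g_i) * |g_i|**(q - 1), with q the dual exponent."""
    q = p / (p - 1.0)                      # Holder conjugate: 1/p + 1/q = 1
    u = np.sign(g) * np.abs(g) ** (q - 1.0)
    n = np.linalg.norm(u, ord=p)
    if n > 0.0:
        u /= n
    return x - eta * np.linalg.norm(g, ord=q) * u

def implicit_tau(y, z, iters=60):
    """Stand-in for the implicitly determined interpolation parameter:
    choose tau in [0, 1] by ternary search on phi(tau) = f(tau*z + (1-tau)*y),
    which is convex, hence unimodal, along the segment. The paper solves an
    implicit equation arising from its analysis instead; this 1-D search is
    only an executable placeholder."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if f(m1 * z + (1 - m1) * y) < f(m2 * z + (1 - m2) * y):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

# Coupled iteration: primal steps in l_p geometry, dual averaging in l_2,
# joined by the implicitly chosen tau (all constants are illustrative).
p = 4.0
x = np.zeros(d)
g_weighted_sum = np.zeros(d)
for k in range(200):
    g = grad_f(x)
    y = lp_steepest_step(x, g, eta=0.2, p=p)    # progress in l_p geometry
    g_weighted_sum += (k + 1) * g               # weighted gradient history
    weight_total = 0.5 * (k + 1) * (k + 2)      # sum of weights 1..k+1
    z = -0.2 * g_weighted_sum / weight_total    # l_2 dual-averaging iterate
    tau = implicit_tau(y, z)                    # implicitly coupled parameter
    x = tau * z + (1.0 - tau) * y
print(f"f(x) after 200 coupled steps: {f(x):.5f}")
```

Note that $\tau = 0$ recovers the plain $\ell_p$ steepest-descent iterate $y$, so the one-dimensional search can never do worse than the uncoupled step; this makes the toy safe to run even with crude constants, though it says nothing about the rate guarantees established in the paper.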