🤖 AI Summary
This work addresses the efficient optimization of Hölder-continuous, $q$-times differentiable convex functions in non-Euclidean spaces, under arbitrary norms (including $\ell_p$ norms for $1 \leq p \leq \infty$) and with access to an inexact ball optimization oracle. We propose a non-Euclidean inexact accelerated proximal point method coupled with inexact uniformly convex regularization. Our key contribution is the first information-theoretic lower bound for high-dimensional convex optimization that applies to general norms and all orders $q \geq 1$, resolving a long-standing open problem in parallel convex optimization. The proposed algorithm achieves nearly optimal convergence rates, matching our lower bound in $\ell_p$ settings as well as in randomized and parallel computation models. Notably, this is the first framework unifying high-order smooth convex optimization for all $q \geq 1$, delivering both theoretical completeness and practical applicability.
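To make the regularization scheme concrete, one standard form such a regularized proximal step can take (illustrative notation, not necessarily the paper's exact construction) is

$$x_{k+1} \approx \operatorname*{arg\,min}_{x} \left\{ f(x) + \frac{\lambda_k}{r} \|x - x_k\|^r \right\},$$

where $\frac{1}{r}\|\cdot\|^r$ plays the role of the $r$-uniformly convex regularizer with respect to the chosen norm, and the $\approx$ reflects that each subproblem is only solved inexactly (e.g., via the inexact ball optimization oracle).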
📝 Abstract
We develop algorithms for the optimization of convex objectives that have Hölder continuous $q$-th derivatives by using a $q$-th order oracle, for any $q \geq 1$. Our algorithms work for general norms under mild conditions, including the $\ell_p$-settings for $1 \leq p \leq \infty$. We can also optimize structured functions that allow for inexactly implementing a non-Euclidean ball optimization oracle. We do this by developing a non-Euclidean inexact accelerated proximal point method that makes use of an \emph{inexact uniformly convex regularizer}. We show a lower bound for general norms that demonstrates our algorithms are nearly optimal in high dimensions in the black-box oracle model for $\ell_p$-settings and all $q \geq 1$, even in randomized and parallel settings. This new lower bound, when applied to the first-order smooth case, resolves an open question in parallel convex optimization.
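Since the abstract only names the method, the following is a minimal runnable sketch of the generic inexact-proximal-point pattern it builds on, in a first-order toy setting. The function names, step sizes, and least-squares test objective are illustrative assumptions; this omits the paper's acceleration, higher-order oracles, and error-control machinery.

```python
# Minimal sketch (not the paper's algorithm): an inexact proximal point
# outer loop with a uniformly convex regularizer (lam/r) * ||x - x_k||_p^r,
# where each prox subproblem is solved only approximately by gradient steps.
import numpy as np

def prox_step_inexact(f_grad, x_k, lam=1.0, r=2, p=2, inner_steps=50, eta=0.01):
    """Approximately minimize f(x) + (lam/r) * ||x - x_k||_p^r,
    mimicking an inexact prox / ball optimization oracle."""
    x = x_k.copy()
    for _ in range(inner_steps):
        d = x - x_k
        norm = np.linalg.norm(d, ord=p)
        if norm > 0:
            # Gradient of (lam/r)*||d||_p^r:  lam * ||d||_p^(r-1) * grad(||d||_p),
            # with grad(||d||_p) = sign(d) * (|d| / ||d||_p)^(p-1).
            reg_grad = lam * norm ** (r - 1) * np.sign(d) * (np.abs(d) / norm) ** (p - 1)
        else:
            reg_grad = np.zeros_like(d)
        x = x - eta * (f_grad(x) + reg_grad)
    return x

def inexact_proximal_point(f_grad, x0, outer_steps=20, **prox_kwargs):
    """Outer loop: each iterate is an inexact prox of the previous one."""
    x = x0
    for _ in range(outer_steps):
        x = prox_step_inexact(f_grad, x, **prox_kwargs)
    return x

if __name__ == "__main__":
    # Toy smooth convex objective f(x) = 0.5 * ||A x - b||_2^2.
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
    f_grad = lambda x: A.T @ (A @ x - b)
    x = inexact_proximal_point(f_grad, np.zeros(10), lam=1.0, r=2, p=3, eta=0.01)
    print("residual norm:", np.linalg.norm(A @ x - b))
```

The non-Euclidean character enters only through the choice of $p$ in the regularizer; the paper's contribution is making such a scheme accelerated and provably near-optimal despite the inexactness of each subproblem.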