Generalized Gradient Norm Clipping & Non-Euclidean $(L_0, L_1)$-Smoothness

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of gradient descent and conditional gradient (Frank–Wolfe) methods in non-Euclidean optimization. The proposed framework unifies their strengths through three contributions: (1) a generalized $(L_0, L_1)$-smoothness theory for non-Euclidean spaces, which characterizes the coupled effect of gradient magnitude and curvature; (2) a generalized gradient norm clipping mechanism that guarantees descent under this smoothness condition; and (3) a principled integration of weight decay via a connection to the Frank–Wolfe short-step rule, combined with a momentum-based gradient estimator that achieves the order-optimal $O(n^{-1/4})$ convergence rate in the stochastic setting. Empirically, the method improves training stability and generalization on image classification and language modeling benchmarks.
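As a rough illustration of the clipping mechanism summarized above (a minimal sketch, not the paper's exact algorithm; `eta` and `tau` are hypothetical hyperparameters, not the paper's notation):

```python
import numpy as np

def clipped_gd_step(x, grad, eta=0.1, tau=1.0):
    """One gradient-descent step with gradient norm clipping.

    The update keeps the gradient's direction but caps the effective
    step length at eta * tau, so the step automatically shrinks when
    the gradient norm is large -- the behavior that makes clipping
    compatible with (L0, L1)-smoothness, where curvature is allowed
    to grow with the gradient norm.
    """
    g_norm = np.linalg.norm(grad)
    scale = min(1.0, tau / g_norm) if g_norm > 0 else 0.0
    return x - eta * scale * grad
```

On a simple quadratic such as $f(x) = \tfrac12\|x\|^2$, the iterates first move at the capped speed while the gradient is large, then converge linearly once its norm drops below `tau`.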

📝 Abstract
This work introduces a hybrid non-Euclidean optimization method which generalizes gradient norm clipping by combining steepest descent and conditional gradient approaches. The method achieves the best of both worlds by establishing a descent property under a generalized notion of $(L_0, L_1)$-smoothness. Weight decay is incorporated in a principled manner by identifying a connection to the Frank–Wolfe short step. In the stochastic case, we show an order-optimal $O(n^{-1/4})$ convergence rate by leveraging a momentum-based gradient estimator. We discuss how to instantiate the algorithms for deep learning and demonstrate their properties on image classification and language modeling.
Problem

Research questions and friction points this paper is trying to address.

How to generalize gradient norm clipping to hybrid steepest-descent/conditional-gradient methods
How to establish a descent property under non-Euclidean $(L_0, L_1)$-smoothness
How to achieve order-optimal convergence rates in the stochastic setting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid non-Euclidean optimization method
Generalized gradient norm clipping
Momentum-based gradient estimator
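The momentum-based estimator can be illustrated by a minimal exponential moving average of stochastic gradients (a simplified stand-in for the paper's estimator; `beta` is a hypothetical momentum parameter):

```python
import numpy as np

def momentum_estimate(d_prev, grad_sample, beta=0.9):
    """Exponential moving average of stochastic gradients.

    Averaging successive minibatch gradients reduces the variance of
    the search direction; variance-reduced estimators of this kind
    are the standard route to O(n^{-1/4}) rates in nonconvex
    stochastic optimization.
    """
    return beta * d_prev + (1.0 - beta) * grad_sample
```

Fed a stream of noisy gradient samples, the estimate `d` tracks their mean with variance damped by the factor `1 - beta`.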
Thomas Pethick
PhD, EPFL
Wanyun Xie
PhD student, EPFL
Mete Erdogan
EPFL (LIONS)
Kimon Antonakopoulos
LIONS-EPFL
Convex Optimization · Continuous Optimization · Variational Inequalities
Tony Silveti-Falls
Université Paris-Saclay (CVN)
V. Cevher
EPFL (LIONS)