Single-loop Algorithms for Stochastic Non-convex Optimization with Weakly-Convex Constraints

📅 2025-04-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses stochastic nonconvex optimization with weakly convex objectives and weakly convex constraints, aiming to overcome limitations of existing double-loop algorithms, such as slow convergence and reliance on diminishing penalty parameters. We propose the first single-loop exact penalty method based on the hinge function, eliminating the need for inner-loop optimization, supporting a constant penalty parameter, and ensuring efficient approximation of KKT points. Theoretically, our algorithm achieves the state-of-the-art stochastic subgradient complexity of $\mathcal{O}(1/\varepsilon^4)$ and, crucially, provides the first optimal complexity guarantee for exact penalization under weakly convex constraints. The framework naturally extends to finite-sum coupled compositional objective structures. Experiments on ROC fairness learning and continual learning with non-forgetting constraints demonstrate significant improvements over baselines in both convergence speed and constraint satisfaction accuracy.
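To make the setup concrete, here is the problem class and the hinge-based exact penalty the summary refers to, in assumed notation (the paper's exact symbols may differ):

```latex
% Weakly convex objective f with weakly convex constraints g_1, ..., g_m:
\min_{x \in \mathbb{R}^d} \; f(x) \quad \text{s.t.} \quad g_i(x) \le 0, \quad i = 1, \dots, m.
% Hinge-based exact penalty with a constant parameter \rho > 0:
\min_{x \in \mathbb{R}^d} \; F_\rho(x) := f(x) + \rho \, \max\Big(0, \; \max_{1 \le i \le m} g_i(x)\Big).
```

Because the hinge penalty is exact, a sufficiently large constant $\rho$ suffices; no diminishing penalty schedule or inner-loop subproblem is needed to approximate KKT points.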

📝 Abstract
Constrained optimization with multiple functional inequality constraints has significant applications in machine learning. This paper examines a crucial subset of such problems where both the objective and constraint functions are weakly convex. Existing methods often face limitations, including slow convergence rates or reliance on double-loop algorithmic designs. To overcome these challenges, we introduce a novel single-loop penalty-based stochastic algorithm. Following the classical exact penalty method, our approach employs a hinge-based penalty, which permits the use of a constant penalty parameter, enabling us to achieve a state-of-the-art complexity for finding an approximate Karush-Kuhn-Tucker (KKT) solution. We further extend our algorithm to address finite-sum coupled compositional objectives, which are prevalent in artificial intelligence applications, establishing improved complexity over existing approaches. Finally, we validate our method through experiments on fair learning with receiver operating characteristic (ROC) fairness constraints and continual learning with non-forgetting constraints.
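For intuition, a single-loop iteration on the penalized objective might look like the following sketch (the oracle interface, step size, and penalty value are illustrative assumptions, not the paper's exact algorithm):

```python
import numpy as np

def hinge_penalty_step(x, grad_f, g_vals, grads_g, rho=10.0, eta=1e-2):
    """One stochastic subgradient step on the hinge-penalized objective
    F_rho(x) = f(x) + rho * max(0, max_i g_i(x)).

    grad_f  : stochastic subgradient of f at x, shape (d,)
    g_vals  : stochastic estimates of g_1(x), ..., g_m(x), shape (m,)
    grads_g : stochastic subgradients of g_1, ..., g_m at x, shape (m, d)
    rho     : constant penalty parameter (illustrative value; the method
              does not require a diminishing schedule)
    eta     : step size (illustrative value)
    """
    i = int(np.argmax(g_vals))   # most violated constraint
    d = grad_f.copy()
    if g_vals[i] > 0:            # hinge is active: penalize the violation
        d += rho * grads_g[i]
    return x - eta * d
```

Each iteration touches only fresh stochastic samples and performs one update; no inner maximization or projection subproblem is solved, which is what distinguishes the single-loop design from double-loop penalty methods.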
Problem

Research questions and friction points this paper is trying to address.

Solves weakly-convex constrained stochastic optimization problems
Improves convergence with single-loop penalty-based algorithm
Extends to finite-sum coupled compositional objectives in AI (see the sketch below)
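As a hedged illustration, the finite-sum coupled compositional structure typically has the following form (taken from the standard FCCO setup in the literature, not quoted from this paper):

```latex
% Finite-sum coupled compositional objective (assumed standard form):
\min_{x \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} f_i\big(g_i(x)\big),
\qquad g_i(x) = \mathbb{E}_{\zeta_i}\!\left[ g_i(x; \zeta_i) \right],
```

where each inner map $g_i$ is accessible only through stochastic samples, coupling the outer sum with the inner expectations.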
Innovation

Methods, ideas, or system contributions that make the work stand out.

Single-loop penalty-based stochastic algorithm
Hinge-based penalty with constant parameter
Improved complexity for KKT solutions
Ming Yang
Department of Computer Science & Engineering, Texas A&M University, College Station, USA.
Gang Li
Department of Computer Science & Engineering, Texas A&M University, College Station, USA.
Quanqi Hu
Meta
Optimization, Machine learning
Qihang Lin
The University of Iowa
Continuous optimization, Stochastic Optimization, Machine Learning, Markov Decision Process
Tianbao Yang
Texas A&M University
machine learning, stochastic optimization