Stochastic Smoothed Primal-Dual Algorithms for Nonconvex Optimization with Linear Inequality Constraints

📅 2025-04-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses stochastic nonconvex optimization with linear inequality constraints. The authors propose a single-loop smoothed primal-dual algorithm that uses a single stochastic gradient sample per iteration. It builds on an inexact gradient descent framework for the Moreau envelope, estimating the envelope gradient with one step of a stochastic primal-dual augmented Lagrangian method, and, crucially, integrates Moreau-envelope analysis with stochastic primal-dual augmented Lagrangian methods for the first time. The iterations avoid subproblem solves, large-batch sampling, and increasing penalty parameters; feasibility is enforced through coupled primal-dual variable updates. Combining recently established global error bounds with a Moreau envelope-based analysis, the paper proves the optimal $O(\varepsilon^{-4})$ sample complexity for finding $\varepsilon$-stationary points, improves it to $O(\varepsilon^{-3})$ with variance reduction under an expected smoothness assumption, and extends the results to stochastic linear constraints, offering both strong convergence guarantees and practical applicability.
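For orientation, the problem class and the Moreau envelope that the analysis is built around can be written as follows; the smoothing parameter $\lambda$ and the stationarity measure shown here are generic placeholders, not necessarily the paper's exact choices.

```latex
% Stochastic nonconvex objective with linear inequality constraints
\min_{x \in \mathbb{R}^n} \; f(x) := \mathbb{E}_{\xi}\!\left[ F(x, \xi) \right]
\quad \text{subject to} \quad A x \le b

% Moreau envelope of a function g with smoothing parameter \lambda > 0
g_{\lambda}(x) := \min_{y \in \mathbb{R}^n}
  \left\{ g(y) + \tfrac{1}{2\lambda} \lVert y - x \rVert^2 \right\}

% x is \varepsilon-stationary when
% \lVert \nabla g_{\lambda}(x) \rVert \le \varepsilon
```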

📝 Abstract
We propose smoothed primal-dual algorithms for solving stochastic and smooth nonconvex optimization problems with linear inequality constraints. Our algorithms are single-loop and only require a single stochastic gradient based on one sample at each iteration. A distinguishing feature of our algorithm is that it is based on an inexact gradient descent framework for the Moreau envelope, where the gradient of the Moreau envelope is estimated using one step of a stochastic primal-dual augmented Lagrangian method. To handle inequality constraints and stochasticity, we combine the recently established global error bounds in constrained optimization with a Moreau envelope-based analysis of stochastic proximal algorithms. For obtaining $\varepsilon$-stationary points, we establish the optimal $O(\varepsilon^{-4})$ sample complexity guarantee for our algorithms and provide extensions to stochastic linear constraints. We also show how to improve this complexity to $O(\varepsilon^{-3})$ by using variance reduction and the expected smoothness assumption. Unlike existing methods, the iterations of our algorithms are free of subproblems, large batch sizes or increasing penalty parameters and use dual variable updates to ensure feasibility.
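To make the single-loop structure concrete, below is a minimal sketch of one plausible reading of such an iteration: a primal gradient step on an augmented Lagrangian plus a proximal pull toward a slowly moving center, followed by projected dual ascent. Everything here, the function names, the step sizes `eta`, `sigma`, `rho`, `lam`, `beta`, and the update order, is an illustrative assumption rather than the paper's exact method.

```python
import numpy as np

def smoothed_primal_dual(grad_sample, A, b, x0, T,
                         eta=1e-2, sigma=1e-2, rho=1.0,
                         lam=1.0, beta=1e-2, rng=None):
    """Hedged sketch of a one-sample, single-loop smoothed primal-dual loop.

    grad_sample(x, rng): unbiased stochastic gradient of the
    objective at x, computed from a single sample.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    x = x0.copy()
    y = np.zeros(A.shape[0])   # dual variables for the constraints A x <= b
    z = x0.copy()              # Moreau-envelope smoothing center
    for _ in range(T):
        g = grad_sample(x, rng)                        # one stochastic gradient
        # gradient of the augmented Lagrangian term plus the proximal pull
        slack = np.maximum(0.0, A @ x - b + y / rho)
        grad_x = g + rho * (A.T @ slack) + (x - z) / lam
        x = x - eta * grad_x                           # primal descent step
        y = np.maximum(0.0, y + sigma * (A @ x - b))   # projected dual ascent
        z = z + beta * (x - z)                         # slow center update
    return x, y
```

The slowly updated center `z` is what links such an iteration to inexact gradient descent on the Moreau envelope; here it is simply an exponential moving average of the primal iterates.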
Problem

Research questions and friction points this paper is trying to address.

Solve stochastic nonconvex optimization with linear inequality constraints
Develop single-loop primal-dual algorithms using stochastic gradients
Achieve optimal sample complexity for ε-stationary points
Innovation

Methods, ideas, or system contributions that make the work stand out.

Smoothed primal-dual algorithms for nonconvex optimization
Single-loop with single stochastic gradient per iteration (a variance-reduced variant is sketched after this list)
Moreau envelope gradient estimation via primal-dual method
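The improvement to $O(\varepsilon^{-3})$ claimed in the abstract comes from variance reduction under an expected smoothness assumption. A recursive, single-sample estimator in the spirit of STORM is one standard way to obtain such rates; the sketch below is illustrative, and whether the paper uses exactly this recursion is an assumption.

```python
def storm_update(grad_fn, xi, x_new, x_old, d_old, a):
    """One step of a STORM-style variance-reduced gradient estimator.

    grad_fn(x, xi): stochastic gradient at x for sample xi; the same
    sample is evaluated at both iterates so the correction term is
    small. a in (0, 1] is the momentum weight (a = 1 recovers the
    plain single-sample stochastic gradient).
    """
    g_new = grad_fn(x_new, xi)   # gradient at the current iterate
    g_old = grad_fn(x_old, xi)   # gradient at the previous iterate
    return g_new + (1.0 - a) * (d_old - g_old)
```

In the single-loop sketch above, this estimator would replace the raw sample `g` before the primal step.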
👥 Authors
Ruichuan Huang
Department of Mathematics, University of British Columbia
Jiawei Zhang
Laboratory of Information and Decision Systems, Massachusetts Institute of Technology
Ahmet Alacaoglu
University of British Columbia
optimization · machine learning