🤖 AI Summary
This paper addresses constrained nonconvex optimization problems involving both equality and inequality constraints. The authors propose an inexact proximal augmented Lagrangian method (P-ALM) that introduces a novel joint adaptive update rule for the penalty parameter and the proximal term: rapid growth in early iterations to accelerate convergence, followed by controlled damping in later stages to mitigate ill-conditioning. Coupled with an inexact subproblem solver, the method ensures global convergence under mild assumptions. Theoretically, P-ALM inherits the convergence guarantees of the classical augmented Lagrangian method (ALM) while relaxing the requirement of exact subproblem solutions. Numerical experiments on both convex and nonconvex benchmarks demonstrate its robustness and efficiency, with faster convergence and improved numerical stability compared to state-of-the-art alternatives.
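For context, a standard proximal augmented Lagrangian setup for an equality-constrained problem $\min_x f(x)$ s.t. $c(x)=0$ takes the following form; the notation here is illustrative, and the paper's exact formulation and adaptive rule for $(\rho_k, \gamma_k)$ may differ:

```latex
% Proximal augmented Lagrangian (illustrative notation, not the paper's):
\mathcal{L}_{\rho}(x,\lambda) \;=\; f(x) + \lambda^{\top} c(x) + \tfrac{\rho}{2}\,\|c(x)\|^{2}
% Inexact proximal subproblem and first-order multiplier update:
x^{k+1} \;\approx\; \operatorname*{arg\,min}_{x}\;
    \mathcal{L}_{\rho_k}(x,\lambda^{k}) + \tfrac{\gamma_k}{2}\,\|x - x^{k}\|^{2},
\qquad
\lambda^{k+1} \;=\; \lambda^{k} + \rho_k\, c(x^{k+1})
```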
📝 Abstract
We propose an inexact proximal augmented Lagrangian method (P-ALM) for nonconvex structured optimization problems. The proposed method features an easily implementable rule not only for updating the penalty parameter but also for adaptively tuning the proximal term. It allows the penalty parameter to grow rapidly in the early stages to speed up progress, while ameliorating the ill-conditioning in later iterations that is a well-known drawback of the traditional approach of linearly increasing the penalty parameter. A key element of our analysis is the observation that the augmented Lagrangian can be controlled effectively along the iterates, provided an initial feasible point is available. Our analysis, while simple, provides a new theoretical perspective on P-ALM and, as a by-product, yields similar convergence properties for its non-proximal variant, the classical augmented Lagrangian method (ALM). Numerical experiments on convex and nonconvex problem instances demonstrate the effectiveness of our approach.
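To make the iteration concrete, below is a minimal runnable sketch of an inexact proximal ALM loop. The adaptive schedule (geometric penalty growth by factor `growth` during a `warmup` phase, then the milder factor `damping`), the shrinking proximal weight, and all parameter names are hypothetical illustrations of the idea described in the abstract, not the paper's actual rule.

```python
import numpy as np
from scipy.optimize import minimize

def proximal_alm(f, c, x0, lam0, rho0=1.0, gamma0=1.0,
                 growth=10.0, damping=1.2, warmup=5,
                 tol=1e-6, max_iter=50):
    """Sketch of an inexact proximal ALM for min f(x) s.t. c(x) = 0.

    The penalty rho grows fast (factor `growth`) during the first
    `warmup` iterations, then by the milder factor `damping`; this
    schedule is a hypothetical stand-in for the paper's adaptive rule.
    """
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    rho, gamma = rho0, gamma0
    for k in range(max_iter):
        xk = x.copy()

        # Proximal augmented-Lagrangian subproblem, solved inexactly
        # (looser inner tolerance early, tighter as k grows).
        def subproblem(z):
            cz = c(z)
            return (f(z) + lam @ cz + 0.5 * rho * (cz @ cz)
                    + 0.5 * gamma * np.sum((z - xk) ** 2))

        x = minimize(subproblem, xk, tol=max(tol, 10.0 ** (-k))).x

        # First-order multiplier update.
        lam = lam + rho * c(x)
        if np.linalg.norm(c(x)) < tol:
            break

        # Adaptive schedule (illustrative): rapid geometric growth in the
        # early iterations, then damped growth to limit ill-conditioning.
        rho *= growth if k < warmup else damping
        gamma = max(gamma / damping, 1e-8)  # gradually relax the proximal term

    return x, lam
```

For instance, `proximal_alm(lambda x: x @ x, lambda x: np.array([x.sum() - 1.0]), np.zeros(2), np.zeros(1))` returns approximately `x = (0.5, 0.5)`, the projection of the origin onto the constraint set.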