🤖 AI Summary
This paper addresses nonconvex constrained optimization problems exhibiting implicit convexity, aiming to compute globally optimal solutions directly in the original nonconvex space, without requiring constraint qualifications or convexification transformations. We propose a unified gradient-based framework encompassing: (i) an enhanced approximate proximal point method for nonsmooth problems; (ii) a bundle method employing linearly constrained quadratic subproblems for smooth settings; and (iii) a subgradient algorithm leveraging implicit convex structure. To our knowledge, this is the first work establishing global convergence guarantees for problems with implicitly convex equality constraints. The oracle complexity is Õ(ε⁻³) for nonsmooth and Õ(ε⁻¹) for smooth instances, matching the rates of their implicitly convex unconstrained counterparts. Crucially, the final iterate is provably globally optimal, avoiding the classical reliance on constraint regularity conditions and explicit convex reformulations.
📝 Abstract
Constrained non-convex optimization is fundamentally challenging, as global solutions are generally intractable and constraint qualifications may not hold. However, in many applications, including safe policy optimization in control and reinforcement learning, such problems possess hidden convexity, meaning they can be reformulated as convex programs via a nonlinear invertible transformation. Typically such transformations are implicit or unknown, making a direct link to the convex program impossible. On the other hand, (sub-)gradients with respect to the original variables are often accessible or can be easily estimated, which motivates algorithms that operate directly in the original (non-convex) problem space using standard (sub-)gradient oracles. In this work, we develop the first algorithms to provably solve such non-convex problems to global minima. First, using a modified inexact proximal point method, we establish global last-iterate convergence guarantees with $\widetilde{\mathcal{O}}(\varepsilon^{-3})$ oracle complexity in the non-smooth setting. For smooth problems, we propose a new bundle-level type method based on linearly constrained quadratic subproblems, improving the oracle complexity to $\widetilde{\mathcal{O}}(\varepsilon^{-1})$. Surprisingly, despite non-convexity, our methodology does not require any constraint qualifications, can handle hidden convex equality constraints, and achieves complexities matching those for solving unconstrained hidden convex optimization.
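To give a flavor of the idea, here is a minimal sketch (not the paper's actual algorithm) of an inexact proximal point method applied to a toy hidden-convex problem: the objective $f(x) = (x^3 - 2)^2$ is non-convex in $x$ but convex in the implicit variable $u = x^3$, and the method below works purely with gradients in the original variable $x$. The step sizes, iteration counts, and the plain gradient-descent inner solver are illustrative choices, not prescriptions from the paper.

```python
# Toy hidden-convex objective: non-convex in x, convex in u = x**3
# (an invertible nonlinear reparametrization the algorithm never sees).
def f(x):
    return (x**3 - 2.0) ** 2

def grad_f(x):
    # Chain rule: d/dx (x^3 - 2)^2 = 2*(x^3 - 2)*3*x^2
    return 2.0 * (x**3 - 2.0) * 3.0 * x**2

def inexact_prox_point(x0, lam=0.5, outer=50, inner=100, lr=1e-2):
    """Inexact proximal point iteration: each outer step approximately
    minimizes the prox subproblem f(y) + (1/(2*lam)) * (y - x_k)^2
    by running a fixed number of gradient steps (the inexact solve)."""
    x = x0
    for _ in range(outer):
        y = x  # warm-start the subproblem at the current iterate
        for _ in range(inner):
            g = grad_f(y) + (y - x) / lam  # gradient of the prox objective
            y -= lr * g
        x = y  # accept the approximate prox step
    return x

x_star = inexact_prox_point(x0=1.0)
# Despite non-convexity in x, the iterates approach the global
# minimizer x = 2**(1/3) ≈ 1.2599.
```

The prox term $(y - x_k)^2 / (2\lambda)$ keeps each subproblem well-conditioned near the current iterate; the hidden convex structure is what rules out spurious local minima and lets a purely local, gradient-based scheme reach the global solution.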