Global Solutions to Non-Convex Functional Constrained Problems with Hidden Convexity

📅 2025-11-13
🤖 AI Summary
This paper addresses non-convex constrained optimization problems exhibiting implicit (hidden) convexity, aiming to compute globally optimal solutions directly in the original non-convex space, without requiring constraint qualifications or convexification transformations. We propose a unified gradient-based framework encompassing: (i) an enhanced approximate proximal point method for nonsmooth problems; (ii) a bundle method employing linearly constrained quadratic subproblems for smooth settings; and (iii) a subgradient algorithm leveraging implicit convex structure. To our knowledge, this is the first work establishing global convergence guarantees for problems with implicitly convex equality constraints. The oracle complexity is Õ(ε⁻³) for nonsmooth and Õ(ε⁻¹) for smooth instances, matching the rates of their implicitly convex unconstrained counterparts. Crucially, the final iterate is provably globally optimal, avoiding the classical reliance on constraint regularity conditions and explicit convex reformulations.

📝 Abstract
Constrained non-convex optimization is fundamentally challenging, as global solutions are generally intractable and constraint qualifications may not hold. However, in many applications, including safe policy optimization in control and reinforcement learning, such problems possess hidden convexity, meaning they can be reformulated as convex programs via a nonlinear invertible transformation. Typically such transformations are implicit or unknown, making the direct link with the convex program impossible. On the other hand, (sub-)gradients with respect to the original variables are often accessible or can be easily estimated, which motivates algorithms that operate directly in the original (non-convex) problem space using standard (sub-)gradient oracles. In this work, we develop the first algorithms to provably solve such non-convex problems to global minima. First, using a modified inexact proximal point method, we establish global last-iterate convergence guarantees with $\widetilde{\mathcal{O}}(\varepsilon^{-3})$ oracle complexity in the non-smooth setting. For smooth problems, we propose a new bundle-level type method based on linearly constrained quadratic subproblems, improving the oracle complexity to $\widetilde{\mathcal{O}}(\varepsilon^{-1})$. Surprisingly, despite non-convexity, our methodology does not require any constraint qualifications, can handle hidden convex equality constraints, and achieves complexities matching those for solving unconstrained hidden convex optimization.
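To make the notion of hidden convexity concrete, here is a minimal toy sketch (an assumed example, not taken from the paper): $F(x) = (x^3 - 1)^2$ is non-convex in $x$, but under the invertible change of variables $u = x^3$ it becomes the convex function $h(u) = (u-1)^2$. A plain gradient method run directly in the original $x$-space, using only a gradient oracle for $F$, can still reach the global minimum $x^* = 1$.

```python
# Toy illustration of hidden convexity (hypothetical example, not the
# paper's algorithm): F(x) = (x^3 - 1)^2 is non-convex in x, yet convex
# in u = x^3, where it reads h(u) = (u - 1)^2 with minimizer u* = 1.

def F(x):
    return (x**3 - 1.0) ** 2

def dF(x):
    # F'(x) = 2 (x^3 - 1) * 3 x^2
    return 6.0 * x**2 * (x**3 - 1.0)

def gradient_descent(x0, step=1e-3, iters=5000):
    # Plain gradient descent in the original (non-convex) x-space,
    # using only the gradient oracle dF.
    x = x0
    for _ in range(iters):
        x -= step * dF(x)
    return x

x_star = gradient_descent(2.0)
# The hidden convex reformulation confirms the answer: min_u (u - 1)^2
# is attained at u = 1, i.e. x = u^(1/3) = 1, so x_star ≈ 1.0.
```

Note that the transformation $u = x^3$ is never used by the iteration itself; it only certifies that the stationary point found in $x$-space is the global minimum, which mirrors the paper's setting where the convexifying map is implicit or unknown.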
Problem

Research questions and friction points this paper is trying to address.

Solving non-convex constrained optimization problems with hidden convexity structure
Developing algorithms using gradient oracles without explicit convex transformations
Achieving global convergence without requiring constraint qualification conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses an inexact proximal point method for non-smooth problems
Proposes a bundle-level method for smooth problems with hidden convexity
Achieves global convergence without constraint qualifications
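The first bullet can be sketched generically: an inexact proximal point method repeatedly (and only approximately) solves a regularized subproblem centered at the current iterate. The sketch below is a schematic of this generic template on the toy objective $F(x) = (x^3 - 1)^2$; the paper's modified method adds further machinery (for constraints and complexity guarantees) that is not reproduced here.

```python
# Generic inexact proximal point sketch (assumed schematic, not the
# paper's algorithm). Each outer step approximately solves
#     x_{k+1} ≈ argmin_x  F(x) + ||x - x_k||^2 / (2 * lam)
# using a fixed budget of inner gradient steps on the regularized
# objective, i.e. the subproblem is solved only inexactly.

def dF(x):
    # Gradient of the toy non-convex objective F(x) = (x^3 - 1)^2.
    return 6.0 * x**2 * (x**3 - 1.0)

def inexact_prox_point(x0, lam=0.1, outer=200, inner=50, step=1e-3):
    x = x0
    for _ in range(outer):
        y, center = x, x
        for _ in range(inner):
            # Gradient of F(y) + ||y - center||^2 / (2 * lam).
            y -= step * (dF(y) + (y - center) / lam)
        x = y  # accept the inexact proximal step
    return x

x_hat = inexact_prox_point(2.0)
```

The proximal regularization term $\|y - x_k\|^2 / (2\lambda)$ makes each subproblem better conditioned than the raw objective, which is the standard motivation for proximal point schemes; the inexactness (a fixed inner budget) is what keeps the per-iteration oracle cost bounded.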
Ilyas Fatkhullin
ETH Zurich
Optimization · Reinforcement Learning · Statistics
Niao He
Associate Professor, ETH Zürich
Optimization · Machine Learning · Reinforcement Learning
Guanghui Lan
Department of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA
Florian Wolf
The Computing & Mathematical Sciences Department, California Institute of Technology, Pasadena, CA