AL-CoLe: Augmented Lagrangian for Constrained Learning

📅 2025-10-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two key challenges in constrained non-convex machine learning, namely large duality gaps and the failure of strong duality, by proposing a dual ascent algorithm grounded in the augmented Lagrangian framework. Under a constraint qualification weaker than the classical Slater condition, the authors establish strong duality for non-convex constrained learning problems and prove global convergence of the algorithm to a primal optimal solution. They further derive a PAC-style generalization error bound. The method requires only minimal modifications to standard training pipelines in order to optimize the objective and constraints jointly, and it achieves significant improvements in constraint satisfaction and model performance on fairness-aware classification tasks. Together, these results unify theoretical guarantees (strong duality, global convergence, and generalization bounds) with practical efficacy in non-convex constrained learning.
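To make the dual ascent structure concrete, here is a minimal numerical sketch of an augmented Lagrangian method on a toy inequality-constrained problem. This is an illustration of the generic technique, not the paper's algorithm: the problem instance, function names, and step sizes are all hypothetical, and the paper's contribution concerns the non-convex learning setting rather than this convex toy.

```python
import numpy as np

# Hypothetical toy instance: minimize f(x) = (x - 2)^2 subject to
# g(x) = x - 1 <= 0. The constrained optimum is x* = 1, with
# optimal multiplier lam* = 2 (from stationarity: 2(x* - 2) + lam* = 0).

def f_grad(x):
    return 2.0 * (x - 2.0)

def g(x):
    return x - 1.0

def g_grad(x):
    return 1.0

def augmented_lagrangian_dual_ascent(rho=10.0, outer_steps=50,
                                     inner_steps=200, lr=0.01):
    """Dual ascent on the augmented Lagrangian for an inequality constraint:
    L_rho(x, lam) = f(x) + (rho/2) * max(0, g(x) + lam/rho)^2 - lam^2/(2*rho).
    """
    x, lam = 0.0, 0.0
    for _ in range(outer_steps):
        # Primal step: approximately minimize L_rho in x by gradient descent.
        for _ in range(inner_steps):
            slack = max(0.0, g(x) + lam / rho)
            grad = f_grad(x) + rho * slack * g_grad(x)
            x -= lr * grad
        # Dual ascent step on the multiplier, projected to stay nonnegative.
        lam = max(0.0, lam + rho * g(x))
    return x, lam

x_opt, lam_opt = augmented_lagrangian_dual_ascent()
print(x_opt, lam_opt)  # converges to roughly x = 1.0, lam = 2.0
```

The quadratic penalty term is what distinguishes this from plain Lagrangian dual ascent: it smooths the dual function and shrinks the duality gap for non-convex problems, which is the property the paper's theory builds on.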

📝 Abstract
Despite the non-convexity of most modern machine learning parameterizations, Lagrangian duality has become a popular tool for addressing constrained learning problems. We revisit Augmented Lagrangian methods, which aim to mitigate the duality gap in non-convex settings while requiring only minimal modifications, yet have remained comparatively unexplored in constrained learning. We establish strong duality results under mild conditions, prove convergence of dual ascent algorithms to feasible and optimal primal solutions, and provide PAC-style generalization guarantees. Finally, we demonstrate the method's effectiveness on fairness-constrained classification tasks.
Problem

Research questions and friction points this paper is trying to address.

Addressing constrained learning problems using Augmented Lagrangian methods
Establishing strong duality and convergence in non-convex settings
Demonstrating effectiveness on fairness-constrained classification tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Augmented Lagrangian methods for constrained learning
Strong duality under mild conditions in non-convex settings
Convergence guarantees for dual ascent algorithms