Cooper: A Library for Constrained Optimization in Deep Learning

📅 2025-04-01
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Efficiently solving constrained optimization problems—such as those enforcing fairness, robustness, or physical consistency—in deep learning remains challenging. This paper introduces Cooper, the first open-source deep learning framework systematically designed for mini-batch stochastic gradient estimation under non-convex, continuous constraints. Cooper tightly integrates the Lagrangian multiplier method with first-order optimization algorithms, natively supports PyTorch’s automatic differentiation, and provides constraint-aware optimizers alongside specialized neural network architecture adapters. Compared to existing approaches, Cooper significantly simplifies constraint specification and training workflow without compromising convergence or stability. Empirical evaluation across multiple benchmark tasks—including fair classification, adversarially robust training, and physics-informed neural networks—demonstrates Cooper’s effectiveness, generalizability, and scalability. By unifying modeling flexibility with algorithmic rigor, Cooper establishes a practical, extensible infrastructure for developing trustworthy AI systems.

📝 Abstract
Cooper is an open-source package for solving constrained optimization problems involving deep learning models. Cooper implements several Lagrangian-based first-order update schemes, making it easy to combine constrained optimization algorithms with high-level features of PyTorch such as automatic differentiation, and specialized deep learning architectures and optimizers. Although Cooper is specifically designed for deep learning applications where gradients are estimated based on mini-batches, it is suitable for general non-convex continuous constrained optimization. Cooper's source code is available at https://github.com/cooper-org/cooper.
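The core idea behind the abstract, Lagrangian-based first-order updates, can be sketched without Cooper itself. The snippet below is not Cooper's API; it is a minimal, dependency-free illustration of simultaneous gradient descent-ascent on a toy problem (minimize f(x) = x² subject to x ≥ 1), with the function name and hyperparameters chosen purely for this example.

```python
# Sketch of a Lagrangian-based first-order update scheme, the family of
# algorithms Cooper implements. NOT Cooper's API; purely illustrative.
# Toy problem: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0.

def gradient_descent_ascent(lr=0.05, steps=2000):
    x, lam = 0.0, 0.0  # primal variable and Lagrange multiplier
    for _ in range(steps):
        # Lagrangian: L(x, lam) = x^2 + lam * (1 - x)
        grad_x = 2 * x - lam   # descend on the primal variable
        grad_lam = 1 - x       # ascend on the multiplier
        x -= lr * grad_x
        lam = max(0.0, lam + lr * grad_lam)  # project multiplier to lam >= 0
    return x, lam

x_star, lam_star = gradient_descent_ascent()
# The KKT point of this problem is x* = 1, lam* = 2.
```

In Cooper this min-max structure is what lets the same PyTorch optimizer machinery (and automatic differentiation) drive both the model parameters and the multipliers.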
Problem

Research questions and friction points this paper is trying to address.

Solving constrained optimization in deep learning
Combining Lagrangian methods with PyTorch features
Handling non-convex continuous constrained optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lagrangian-based first-order update schemes
Integration with PyTorch automatic differentiation
General non-convex continuous constrained optimization
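Since the paper advertises several Lagrangian-based update schemes, a second classical member of that family is worth sketching: the augmented Lagrangian method, which adds a penalty term to stabilize the multiplier updates. Again, this is a generic illustration on the same toy problem, not Cooper's implementation; all names and constants here are assumptions for the example.

```python
# Illustrative augmented Lagrangian method (NOT Cooper's API) on the toy
# problem: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0.
# Inequality-constrained augmented Lagrangian:
#   L_c(x, lam) = f(x) + (1/(2c)) * (max(0, lam + c*g(x))^2 - lam^2)

def augmented_lagrangian(c=1.0, outer=50, inner=300, lr=0.1):
    x, lam = 0.0, 0.0
    for _ in range(outer):
        # Inner loop: approximately minimize L_c over x by gradient descent.
        for _ in range(inner):
            g = 1 - x
            # dL_c/dx = f'(x) + max(0, lam + c*g) * g'(x), with g'(x) = -1
            grad_x = 2 * x - max(0.0, lam + c * g)
            x -= lr * grad_x
        # Outer multiplier update; the max keeps lam >= 0 automatically.
        lam = max(0.0, lam + c * (1 - x))
    return x, lam

x_star, lam_star = augmented_lagrangian()
# Converges to the same KKT point as before: x* = 1, lam* = 2.
```

The penalty coefficient c trades off constraint enforcement against conditioning of the inner minimization, which is one reason a library offering multiple update schemes is useful in practice.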