AI Summary
Efficiently solving constrained optimization problems in deep learning, such as those enforcing fairness, robustness, or physical consistency, remains challenging. This paper introduces Cooper, an open-source deep learning framework systematically designed for mini-batch stochastic gradient estimation under non-convex, continuous constraints. Cooper tightly integrates the Lagrangian multiplier method with first-order optimization algorithms, natively supports PyTorch's automatic differentiation, and provides constraint-aware optimizers alongside specialized neural network architecture adapters. Compared to existing approaches, Cooper significantly simplifies constraint specification and training workflows without compromising convergence or stability. Empirical evaluation across multiple benchmark tasks, including fair classification, adversarially robust training, and physics-informed neural networks, demonstrates Cooper's effectiveness, generalizability, and scalability. By unifying modeling flexibility with algorithmic rigor, Cooper establishes a practical, extensible infrastructure for developing trustworthy AI systems.
Abstract
Cooper is an open-source package for solving constrained optimization problems involving deep learning models. Cooper implements several Lagrangian-based first-order update schemes, making it easy to combine constrained optimization algorithms with high-level features of PyTorch such as automatic differentiation, and specialized deep learning architectures and optimizers. Although Cooper is specifically designed for deep learning applications where gradients are estimated based on mini-batches, it is suitable for general non-convex continuous constrained optimization. Cooper's source code is available at https://github.com/cooper-org/cooper.
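To make the idea of a "Lagrangian-based first-order update scheme" concrete, here is a minimal, dependency-free Python sketch of simultaneous gradient descent-ascent on a Lagrangian: descend in the primal variable, ascend in the (non-negative) multiplier. The toy problem (minimize x² subject to x ≥ 1), step size, and iteration count are illustrative assumptions for this sketch only; they do not reflect Cooper's actual API, which is documented in the repository linked above.

```python
# Toy constrained problem: minimize f(x) = x^2  subject to  g(x) = 1 - x <= 0.
# Lagrangian: L(x, lam) = x^2 + lam * (1 - x), with lam >= 0.
# The KKT point is x* = 1, lam* = 2.

def gradient_descent_ascent(lr=0.05, steps=2000):
    x, lam = 0.0, 0.0
    for _ in range(steps):
        # dL/dx = 2x - lam: primal variable descends the Lagrangian.
        grad_x = 2.0 * x - lam
        # dL/dlam = 1 - x: multiplier ascends (increases while the
        # constraint is violated), then is projected onto lam >= 0.
        grad_lam = 1.0 - x
        x = x - lr * grad_x
        lam = max(0.0, lam + lr * grad_lam)
    return x, lam

x_opt, lam_opt = gradient_descent_ascent()
print(x_opt, lam_opt)  # converges near the KKT point (1, 2)
```

In a deep learning setting, the scalar gradients above are replaced by mini-batch gradient estimates obtained via automatic differentiation, which is the regime the abstract describes; the descent-ascent structure is unchanged.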