🤖 AI Summary
Stochastic constrained optimization remains challenging in deep neural network training, particularly for fairness- and safety-critical applications where constraints are random, nonconvex, and sampling-based.
Method: This paper introduces humancompatible.train, a general stochastic optimization framework supporting stochastic constraints. It couples stochastic gradient descent with Lagrangian relaxation to model and solve dynamic, nonconvex, and sampling-induced stochastic constraints efficiently. The framework implements several state-of-the-art algorithms that previously lacked open-source implementations (e.g., SPIDER-SG, SC-AdaGrad).
Contributions/Results: (1) An extensible, modular PyTorch toolkit with unified interfaces; (2) Empirical validation across multiple fairness learning benchmarks, demonstrating favorable trade-offs between constraint satisfaction rate and model accuracy; (3) A step toward standardizing stochastic constrained optimization, bridging theoretical advances with practical deployment.
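The Lagrangian-relaxation idea behind the method can be sketched on a toy problem. The following is a minimal illustration (not the toolkit's actual API) of stochastic gradient descent-ascent on the Lagrangian L(θ, λ) = f(θ) + λ·g(θ): descent on the model parameter θ, projected ascent on the multiplier λ ≥ 0, with noisy gradient and constraint samples standing in for mini-batch stochasticity. The toy objective and constraint are assumptions chosen so the saddle point is known in closed form (θ* = 1, λ* = 2).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: minimize E[f(theta)] = theta^2
# subject to E[g(theta)] = 1 - theta <= 0 (i.e., theta >= 1).
# Saddle point of L = theta^2 + lam * (1 - theta): theta* = 1, lam* = 2.
theta, lam = 0.0, 0.0
eta = 0.01  # step size for both players

for _ in range(20_000):
    # Noisy samples emulate mini-batch estimates of the gradients.
    grad_theta = 2 * theta - lam + rng.normal(0, 0.1)  # d/dtheta [f + lam*g]
    grad_lam = (1 - theta) + rng.normal(0, 0.1)        # d/dlam = g(theta)

    theta -= eta * grad_theta              # gradient descent on theta
    lam = max(0.0, lam + eta * grad_lam)   # projected gradient ascent, lam >= 0

print(theta, lam)  # should hover near the saddle point (1, 2)
```

The same descent-ascent template underlies the constrained DNN setting, with θ replaced by network weights, f by the training loss, and g by a sampled fairness or safety constraint; the algorithms the toolkit implements differ in how they estimate and damp these stochastic gradients.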
📝 Abstract
There has been considerable interest recently in constrained training of deep neural networks (DNNs) for applications such as fairness and safety. Several toolkits have been proposed for this task, yet there is still no industry standard. We present humancompatible.train (https://github.com/humancompatible/train), an easily extendable PyTorch-based Python package for training DNNs with stochastic constraints. We implement multiple previously unimplemented algorithms for stochastically constrained stochastic optimization. We demonstrate the toolkit's use by comparing two algorithms on a deep learning task with fairness constraints.