🤖 AI Summary
This work addresses the challenge of integrating hard constraints into deep learning, where conventional orthogonal projections onto constraint sets often induce gradient saturation and impede optimization. To overcome this limitation, the authors propose a differentiable soft radial projection layer that maps inputs from Euclidean space radially into the interior of the feasible set, thereby guaranteeing strict feasibility while avoiding vanishing gradients. The layer's Jacobian is full-rank almost everywhere, circumventing the gradient degeneracy of traditional boundary-based projections, and the construction preserves the universal approximation capability of the underlying network. Because the layer is a differentiable reparameterization, the model trains end to end; empirically, the approach consistently outperforms state-of-the-art optimization- and projection-based baselines in both convergence speed and solution quality.
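To make the idea concrete, here is a minimal sketch of a radial interior mapping for the special case of a ball constraint. This is an illustrative assumption, not the paper's general construction: `tanh` compresses the radial coordinate smoothly, so every input lands strictly inside the ball and no point is collapsed onto the boundary.

```python
import numpy as np

def soft_radial_projection(x, center, radius):
    """Hypothetical sketch: smoothly map any point of R^n into the
    OPEN ball {y : ||y - center|| < radius}. Since tanh is strictly
    increasing and bounded by 1, the map is injective and never
    touches the boundary, so its Jacobian stays full-rank."""
    x = np.asarray(x, dtype=float)
    n = np.linalg.norm(x)
    if n < 1e-12:                       # smooth limit at the origin
        return np.asarray(center, dtype=float) + x
    return center + radius * np.tanh(n) * (x / n)

def numerical_jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian, column by column."""
    x = np.asarray(x, dtype=float)
    cols = []
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        cols.append((f(x + e) - f(x - e)) / (2 * eps))
    return np.stack(cols, axis=1)

center, radius = np.zeros(3), 2.0
x = np.array([5.0, -3.0, 1.0])          # point far outside the ball
y = soft_radial_projection(x, center, radius)
J = numerical_jacobian(lambda z: soft_radial_projection(z, center, radius), x)
print(np.linalg.norm(y - center) < radius)   # strict feasibility
print(np.linalg.matrix_rank(J))              # full rank: 3
```

The key design point mirrored here is that feasibility is enforced by reparameterization of the radial coordinate, not by clipping, so gradients continue to flow in every direction.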
📝 Abstract
Integrating hard constraints into deep learning is essential for safety-critical systems. Yet existing constructive layers that project predictions onto constraint boundaries face a fundamental bottleneck: gradient saturation. By collapsing exterior points onto lower-dimensional surfaces, standard orthogonal projections induce rank-deficient Jacobians, which nullify gradients orthogonal to active constraints and hinder optimization. We introduce Soft-Radial Projection, a differentiable reparameterization layer that circumvents this issue through a radial mapping from Euclidean space into the interior of the feasible set. This construction guarantees strict feasibility while preserving a full-rank Jacobian almost everywhere, thereby preventing the optimization stalls typical of boundary-based methods. We theoretically prove that the architecture retains the universal approximation property and empirically show improved convergence behavior and solution quality over state-of-the-art optimization- and projection-based baselines.
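The rank deficiency that motivates the paper can be checked numerically in a toy setting. The sketch below (an illustration of the general phenomenon, not code from the paper) projects an exterior point orthogonally onto the unit ball: the projection collapses the radial direction, so the Jacobian at any exterior point has rank n-1 and gradients orthogonal to the active constraint vanish.

```python
import numpy as np

def hard_projection(x):
    """Orthogonal projection onto the closed unit ball: every
    exterior point is collapsed onto the boundary sphere."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def numerical_jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian, column by column."""
    cols = []
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        cols.append((f(x + e) - f(x - e)) / (2 * eps))
    return np.stack(cols, axis=1)

x = np.array([3.0, 0.0, 0.0])           # exterior point in R^3
J = numerical_jacobian(hard_projection, x)
print(np.linalg.matrix_rank(J))         # prints 2: the radial direction is nullified
```

At x = (3, 0, 0) the Jacobian is diag(0, 1/3, 1/3): any loss gradient along the radial direction is annihilated, which is exactly the optimization stall the soft radial layer is designed to avoid.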