🤖 AI Summary
This work addresses the challenge of preserving geometric constraints when training neural networks on manifolds (with or without boundary). We propose a class of geometry-aware neural architectures that interpret network layers as discretizations of projected dynamical systems on manifolds, explicitly enforcing manifold constraints through geometric update operations such as exponential maps and projections. A universal approximation theorem is established for constrained neural ODEs, distinguishing between output-level and layer-wise constraint mechanisms. Furthermore, we introduce a data-driven projection method grounded in the small-time heat-kernel limit. Evaluated on tasks including dynamics modeling on $S^2$ and $\mathrm{SO}(3)$, as well as feature diffusion on $S^{d-1}$, our approach achieves high-precision geometric preservation with either analytical or learned projections and significantly outperforms existing methods.
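To make the two layer types concrete, here is a minimal NumPy sketch of both update styles on the sphere $S^{d-1}$. This is an illustration under assumptions, not the paper's implementation: the names `projected_layer`, `intrinsic_layer`, and `exp_map_sphere` are hypothetical, and `f` stands in for a learned vector field.

```python
import numpy as np

def project_sphere(x):
    # Euclidean projection onto the unit sphere S^{d-1}: x / ||x||.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def exp_map_sphere(x, v):
    # Exponential map on S^{d-1} at x for a tangent vector v (assumed
    # orthogonal to x): follow the geodesic for length ||v||.
    nv = np.maximum(np.linalg.norm(v, axis=-1, keepdims=True), 1e-12)
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def projected_layer(x, f, h):
    # "Projection" layer: explicit Euler step of the vector field f,
    # followed by projection back onto the manifold.
    return project_sphere(x + h * f(x))

def intrinsic_layer(x, f, h):
    # "Exponential map" layer: project f(x) onto the tangent space at x,
    # then take an intrinsic geodesic step of size h.
    fx = f(x)
    v = fx - np.sum(fx * x, axis=-1, keepdims=True) * x
    return exp_map_sphere(x, h * v)

# Usage: iterate either layer; the state stays on S^2 to machine precision.
x = project_sphere(np.random.randn(3))
f = lambda z: np.cross(np.array([0.0, 0.0, 1.0]), z)  # rotational field
for _ in range(100):
    x = projected_layer(x, f, 0.01)
print(np.linalg.norm(x))  # ~1.0
```

Both layers are discretizations of the same projected dynamics; they differ only in whether feasibility is restored extrinsically (projection) or maintained intrinsically (geodesic step).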
📝 Abstract
Preserving geometric structure is important in many learning problems. We propose a unified class of geometry-aware architectures that interleave geometric updates between layers, where both projection layers and intrinsic exponential-map updates arise as discretizations of projected dynamical systems on manifolds (with or without boundary). Within this framework, we establish universal approximation results for constrained neural ODEs. We also analyze architectures that enforce geometry only at the output, proving a separate universal approximation property that enables direct comparison to interleaved designs. When the constraint set is unknown, we learn projections via small-time heat-kernel limits, showing that diffusion/flow-matching models can be used as data-based projections. Experiments on dynamics over $S^2$ and $\mathrm{SO}(3)$, and on diffusion of $S^{d-1}$-valued features, demonstrate exact feasibility for analytic updates and strong performance for learned projections.
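One way to read the learned-projection idea, offered here as a hedged sketch rather than the paper's exact procedure: by Tweedie's formula, the posterior mean under Gaussian smoothing is $x + \sigma^2 \nabla_x \log p_\sigma(x)$, and as $\sigma \to 0$ this denoising step approximately projects $x$ onto the support of the data. Assuming a pre-trained score/flow network `score_fn` (a hypothetical name, not the paper's API):

```python
import numpy as np

def learned_projection(x, score_fn, sigma=1e-2, n_steps=10):
    # Data-based projection sketch: for small sigma, a Tweedie-style step
    #   x <- x + sigma^2 * score_fn(x, sigma)
    # moves x toward the data manifold; iterating refines the estimate.
    # `score_fn(x, sigma)` is an assumed pre-trained score model.
    for _ in range(n_steps):
        x = x + sigma**2 * score_fn(x, sigma)
    return x
```

In this reading, the diffusion/flow-matching model plays the role of the analytic projection when the constraint set has no closed form.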