🤖 AI Summary
Conventional artificial neural networks suffer from limited generalization due to regularization grounded in discrete (zero-dimensional) samples. Method: This paper introduces the Deep Sturm–Liouville (DSL) framework—the first to embed Sturm–Liouville theory into deep learning—enabling one-dimensional, function-level regularization along continuous field lines in the input space. DSL jointly learns a differentiable vector field and neural-parameterized orthogonal basis functions, achieving geometry-aware implicit continuous regularization via implicit differentiation, rank-1 parabolic eigenvalue modeling, and field-line integration. Contribution/Results: Evaluated on benchmarks including MNIST and CIFAR-10, DSL significantly improves sample efficiency and generalization performance, attaining state-of-the-art results.
📝 Abstract
Although Artificial Neural Networks (ANNs) have achieved remarkable success across various tasks, they still suffer from limited generalization. We hypothesize that this limitation arises from the traditional sample-based (0--dimensionnal) regularization used in ANNs. To overcome this, we introduce extit{Deep Sturm--Liouville} (DSL), a novel function approximator that enables continuous 1D regularization along field lines in the input space by integrating the Sturm--Liouville Theorem (SLT) into the deep learning framework. DSL defines field lines traversing the input space, along which a Sturm--Liouville problem is solved to generate orthogonal basis functions, enforcing implicit regularization thanks to the desirable properties of SLT. These basis functions are linearly combined to construct the DSL approximator. Both the vector field and basis functions are parameterized by neural networks and learned jointly. We demonstrate that the DSL formulation naturally arises when solving a Rank-1 Parabolic Eigenvalue Problem. DSL is trained efficiently using stochastic gradient descent via implicit differentiation. DSL achieves competitive performance and demonstrate improved sample efficiency on diverse multivariate datasets including high-dimensional image datasets such as MNIST and CIFAR-10.