🤖 AI Summary
This work studies the simultaneous approximation of score functions and their derivatives of arbitrary order by deep neural networks, focusing on probability distributions with low-dimensional structure and unbounded support. Conventional approaches rely on bounded-support assumptions and struggle with high-order derivatives; addressing these limitations, the authors establish, for the first time, a dimension-free uniform approximation theory for high-order derivatives that removes the bounded-support requirement. Methodologically, the analysis combines the nonlinear approximation capacity of deep ReLU networks, regularity analysis of the density function, and embedding theorems for high-order Sobolev spaces. Theoretically, the resulting approximation error attains the best rate currently known in the literature, with convergence independent of the ambient input dimension. This provides a rigorous theoretical foundation, under unbounded support, for score-based generative modeling and Bayesian inference.
📝 Abstract
We present a theory for the simultaneous approximation of the score function and its derivatives, covering data distributions with low-dimensional structure and unbounded support. Our approximation error bounds match those in the literature while relying on assumptions that relax the usual bounded-support requirement. Crucially, our bounds are free from the curse of dimensionality. Moreover, we establish approximation guarantees for derivatives of any prescribed order, extending beyond the commonly considered first-order setting.
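To make the central object concrete: the score function of a density p is the gradient of its log-density, s(x) = ∇ₓ log p(x), and it is this function (together with its higher derivatives) that the networks in the paper approximate. The sketch below, which is illustrative and not part of the paper, computes the score of a one-dimensional Gaussian analytically and checks it against a central finite difference of the log-density; the function names are chosen here for illustration only.

```python
import numpy as np

def log_density(x, sigma=1.0):
    # log-density of a one-dimensional Gaussian N(0, sigma^2)
    return -0.5 * (x / sigma) ** 2 - 0.5 * np.log(2 * np.pi * sigma**2)

def score_exact(x, sigma=1.0):
    # analytic score of N(0, sigma^2): d/dx log p(x) = -x / sigma^2
    return -x / sigma**2

def score_finite_diff(x, h=1e-5, sigma=1.0):
    # numerical score via central difference of the log-density
    return (log_density(x + h, sigma) - log_density(x - h, sigma)) / (2 * h)

xs = np.linspace(-3.0, 3.0, 7)
# the two agree to high precision on a grid with unbounded-support density
assert np.allclose(score_exact(xs), score_finite_diff(xs), atol=1e-6)
```

Note that for the Gaussian the score is linear and its higher derivatives are trivial; the paper's contribution concerns densities where the score and its high-order derivatives must be approximated uniformly by deep ReLU networks.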