Concentration Inequalities for the Stochastic Optimization of Unbounded Objectives with Application to Denoising Score Matching

📅 2025-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses statistical error control in stochastic optimization with unbounded objective functions, a setting particularly relevant to denoising score matching (DSM), where the objective can be unbounded even when the data distribution has bounded support. Methodologically, the authors develop the first generalization theory framework for unbounded functions by introducing a sample-dependent McDiarmid-type inequality and deriving a tight upper bound on the Rademacher complexity of locally Lipschitz function classes, thereby unifying uniform law-of-large-numbers guarantees with sharp statistical error bounds. The theoretical contributions are: (1) the first rigorous $O(1/\sqrt{n})$ statistical error bound for DSM; and (2) a quantitative characterization of how resampling the Gaussian auxiliary variables improves estimation accuracy. These results remove the conventional boundedness assumption, significantly broadening the theoretical foundations of score-based generative modeling and related stochastic optimization methods.
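For context, the classical bounded-differences form of McDiarmid's inequality that the paper generalizes is stated below; this is a textbook statement, not the paper's theorem. The paper's sample-dependent variant replaces the fixed constants $c_i$ with bounds that may depend on the sample itself, which is what makes unbounded objectives tractable.

$$ \Pr\bigl( f(X_1,\dots,X_n) - \mathbb{E}[f(X_1,\dots,X_n)] \ge t \bigr) \;\le\; \exp\!\left( -\frac{2t^2}{\sum_{i=1}^n c_i^2} \right), $$

where $X_1,\dots,X_n$ are independent and $f$ satisfies the one-component difference bound $|f(x_1,\dots,x_i,\dots,x_n) - f(x_1,\dots,x_i',\dots,x_n)| \le c_i$ for every coordinate $i$ and all arguments.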

📝 Abstract
We derive novel concentration inequalities that bound the statistical error for a large class of stochastic optimization problems, focusing on the case of unbounded objective functions. Our derivations utilize the following tools: 1) a new form of McDiarmid's inequality that is based on sample-dependent, one-component difference bounds and which leads to a novel uniform law of large numbers for unbounded functions; 2) a Rademacher complexity bound for families of functions that satisfy an appropriate local Lipschitz property. As an application of these results, we derive statistical error bounds for denoising score matching (DSM), an application that inherently requires one to consider unbounded objective functions, even when the data distribution has bounded support. In addition, our results establish the benefit of sample reuse in algorithms that employ easily sampled auxiliary random variables in addition to the training data, e.g., as in DSM, which uses auxiliary Gaussian random variables.
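To make the DSM setting concrete, here is a minimal NumPy sketch of the DSM objective with Gaussian auxiliary variables; the names `dsm_loss`, `score_net`, and `n_noise` are hypothetical placeholders, not the paper's code. The regression target $-\varepsilon/\sigma$ is unbounded in the auxiliary Gaussian $\varepsilon$ even when the data have bounded support, which is the source of the unbounded objective discussed above.

```python
import numpy as np

def dsm_loss(score_net, x, sigma, rng, n_noise=1):
    """Monte Carlo estimate of the denoising score matching objective.

    score_net : callable mapping perturbed samples (m, d) to score estimates (m, d)
    x         : data batch of shape (n, d)
    sigma     : Gaussian perturbation level
    n_noise   : number of fresh auxiliary Gaussian draws per data point
    """
    # Reuse each data point with n_noise independent auxiliary Gaussians.
    x_rep = np.repeat(x, n_noise, axis=0)            # shape (n * n_noise, d)
    eps = rng.standard_normal(x_rep.shape)           # auxiliary Gaussian draws
    x_noisy = x_rep + sigma * eps
    # Score of the Gaussian perturbation kernel:
    # grad_x log N(x_noisy | x, sigma^2 I) = -(x_noisy - x) / sigma^2 = -eps / sigma.
    # This target is unbounded in eps even if x lies in a bounded set.
    target = -eps / sigma
    residual = score_net(x_noisy) - target
    return np.mean(np.sum(residual**2, axis=1))
```

Increasing `n_noise` reuses each training point with several independently resampled Gaussians; this is the auxiliary-variable sample reuse whose benefit the paper quantifies.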
Problem

Research questions and friction points this paper is trying to address.

Concentration inequalities for unbounded objectives
Statistical error bounds in stochastic optimization
Application to denoising score matching
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sample-dependent variant of McDiarmid's inequality
Rademacher complexity bound for locally Lipschitz function classes (see the sketch after this list)
Sample reuse via resampled Gaussian auxiliary variables in DSM
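For context, the standard Rademacher generalization bound in the bounded case (a textbook statement, not the paper's theorem): if every $f \in \mathcal{F}$ takes values in $[0,1]$ and $X_1,\dots,X_n$ are i.i.d., then with probability at least $1-\delta$,

$$ \sup_{f \in \mathcal{F}} \left( \mathbb{E}[f(X)] - \frac{1}{n}\sum_{i=1}^n f(X_i) \right) \;\le\; 2\,\mathfrak{R}_n(\mathcal{F}) + \sqrt{\frac{\log(1/\delta)}{2n}}. $$

The paper's contribution is a bound of this type for unbounded, locally Lipschitz classes, where the $[0,1]$ range assumption fails and the standard McDiarmid step must be replaced by the sample-dependent variant; the $O(1/\sqrt{n})$ rate for DSM then follows from controlling the Rademacher complexity term.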