The Power of Random Features and the Limits of Distribution-Free Gradient Descent

📅 2025-05-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper studies the limits of gradient-based optimization (mini-batch SGD) under *distribution-free* assumptions, i.e., without any hypotheses on the data distribution. The main result shows that if a parametric model can be learned distribution-freely by mini-batch SGD, then with high probability the target function can also be approximated by a polynomial-size linear combination of random features, where the size depends on the number of gradient steps and the numerical precision used. Along the way, the authors introduce *average probabilistic dimension complexity* (adc), extending the probabilistic dimension complexity of Kamath et al. (2020); they prove that adc is polynomially related to statistical query dimension and use this relationship to exhibit an infinite separation between adc and standard dimension complexity. Together, these results delineate what distribution-free gradient descent can learn and help explain why distributional assumptions are often essential when training neural networks in practice.

📝 Abstract
We study the relationship between gradient-based optimization of parametric models (e.g., neural networks) and optimization of linear combinations of random features. Our main result shows that if a parametric model can be learned using mini-batch stochastic gradient descent (bSGD) without making assumptions about the data distribution, then with high probability, the target function can also be approximated using a polynomial-sized combination of random features. The size of this combination depends on the number of gradient steps and numerical precision used in the bSGD process. This finding reveals fundamental limitations of distribution-free learning in neural networks trained by gradient descent, highlighting why making assumptions about data distributions is often crucial in practice. Along the way, we also introduce a new theoretical framework called average probabilistic dimension complexity (adc), which extends the probabilistic dimension complexity developed by Kamath et al. (2020). We prove that adc has a polynomial relationship with statistical query dimension, and use this relationship to demonstrate an infinite separation between adc and standard dimension complexity.
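The random-feature models in the abstract can be made concrete with a small sketch: features are drawn once at random and frozen, and only the linear output weights are fitted. The target function, feature count, and ridge regularization below are illustrative assumptions, not choices from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 5, 200, 1000          # input dim, number of random features, samples

# Illustrative target function to approximate (not from the paper).
def target(X):
    return np.sin(X @ np.ones(d))

# Draw random ReLU features once; they are never updated during learning.
W = rng.normal(size=(d, m))
b = rng.normal(size=m)

def phi(X):
    return np.maximum(X @ W + b, 0.0)

# Fit only the linear combination, via ridge-regularized least squares.
X = rng.normal(size=(n, d))
y = target(X)
F = phi(X)
lam = 1e-3
coef = np.linalg.solve(F.T @ F + lam * np.eye(m), F.T @ y)

# Measure approximation quality on fresh inputs.
X_test = rng.normal(size=(200, d))
mse = np.mean((phi(X_test) @ coef - target(X_test)) ** 2)
print(f"test MSE: {mse:.4f}")
```

The paper's result concerns how large `m` must be: distribution-free learnability by bSGD forces the target to admit such an approximation with `m` polynomial in the relevant parameters.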
Problem

Research questions and friction points this paper is trying to address.

Analyzing the limits of gradient descent in distribution-free learning
Linking random-feature approximation to parametric model training
Introducing the adc framework to measure learning complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Approximates target functions with polynomial-size random-feature combinations
Reduces distribution-free gradient descent to random-feature learning
Introduces average probabilistic dimension complexity (adc)
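The other side of the correspondence is mini-batch SGD (bSGD) on a parametric model. The toy model, loss, and hyperparameters below are illustrative assumptions chosen to show the bSGD loop the paper analyzes, not the paper's own setup.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, batch, steps, lr = 5, 1000, 32, 500, 0.1

X = rng.normal(size=(n, d))
y = (X @ np.ones(d) > 0).astype(float)   # toy binary target

w = np.zeros(d)                          # model parameters

for _ in range(steps):
    idx = rng.integers(0, n, size=batch)           # sample a mini-batch
    pred = 1.0 / (1.0 + np.exp(-(X[idx] @ w)))     # logistic model
    grad = X[idx].T @ (pred - y[idx]) / batch      # logistic-loss gradient
    w -= lr * grad                                 # gradient step

acc = np.mean(((X @ w) > 0) == (y > 0.5))
print(f"train accuracy: {acc:.3f}")
```

In the paper's terms, the number of steps and the numerical precision of such a loop bound the size of the random-feature combination needed to match what it can learn distribution-freely.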