🤖 AI Summary
A foundational assumption in theoretical analyses of deep neural networks, namely that input distributions can be treated as Gaussian, remains unverified for structured non-Gaussian inputs such as Gaussian mixtures. Method: The authors develop a continuous-time stochastic differential equation (SDE) model of SGD dynamics, complemented by rigorous theoretical analysis and large-scale empirical validation across diverse structured input distributions. Contribution/Results: They demonstrate, for the first time, that after appropriate input standardization, parameter evolution under non-Gaussian structured inputs closely matches that under Gaussian inputs, revealing a strong form of universality. Building on this insight, they propose a standardization scheme (sketched below) and construct the first unified theoretical framework bridging idealized Gaussian assumptions and real-world data distributions, which improves both the explanatory power and the predictive accuracy of existing theoretical models in practical settings.
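The summary does not reproduce the paper's exact standardization scheme; a natural candidate, assumed here purely for illustration, is full whitening by the mixture's first two moments. For inputs drawn from a Gaussian mixture, this would read:

```latex
% Assumed standardization: whitening by the mixture's overall mean and covariance.
% For x ~ \sum_k \pi_k \, \mathcal{N}(\mu_k, \Sigma_k):
\mu = \sum_k \pi_k \mu_k, \qquad
\Sigma = \sum_k \pi_k \left( \Sigma_k + \mu_k \mu_k^{\top} \right) - \mu \mu^{\top}, \qquad
\tilde{x} = \Sigma^{-1/2} \left( x - \mu \right).
```

The standardized input $\tilde{x}$ then has zero mean and identity covariance, matching the first two moments of the idealized Gaussian input model even though its distribution remains non-Gaussian.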
📝 Abstract
Bridging the gap between the practical performance of deep learning and its theoretical foundations often involves analyzing neural networks trained with stochastic gradient descent (SGD). Expanding on previous research that modeled structured inputs under a simple Gaussian setting, we analyze the behavior of a deep learning system trained on inputs modeled as Gaussian mixtures, which better approximate general structured inputs. Through empirical analysis and theoretical investigation, we demonstrate that under certain standardization schemes, the deep learning model converges toward the behavior observed in the Gaussian setting, even when the input data follow more complex or real-world distributions. This finding exhibits a form of universality, in which diverse structured distributions yield results consistent with Gaussian assumptions, supporting the theoretical understanding of deep learning models.
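As a minimal sketch of the universality claim, the toy experiment below is our own construction, not the paper's: the network, the two-component mixture, and the learning rate are all illustrative assumptions. It trains a one-neuron student on a fixed teacher with online SGD, once on standard Gaussian inputs and once on Gaussian-mixture inputs, with and without moment-based standardization. Under the claimed universality, the standardized-mixture run should track the Gaussian run closely.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 200            # input dimension
T = 20_000         # online SGD steps
lr = 0.05 / d      # learning rate in the usual high-dimensional scaling

# Two-component mixture N(+mu, I) / N(-mu, I) with equal weights (an assumption).
mu = rng.standard_normal(d) / np.sqrt(d)

# Mixture moments: mean is 0 by symmetry; covariance is I + mu mu^T.
mean = np.zeros(d)
cov = np.eye(d) + np.outer(mu, mu)

# Whitening matrix Sigma^{-1/2} for the standardization step.
evals, evecs = np.linalg.eigh(cov)
whiten = evecs @ np.diag(evals ** -0.5) @ evecs.T

def sample_mixture(n):
    signs = rng.choice([-1.0, 1.0], size=(n, 1))
    return signs * mu + rng.standard_normal((n, d))

def sample_gaussian(n):
    return rng.standard_normal((n, d))

# Teacher-student setup: labels come from a fixed teacher vector.
w_star = rng.standard_normal(d) / np.sqrt(d)

def run_sgd(sampler, standardize=False):
    """Online SGD on a one-neuron tanh student; returns teacher-student overlaps."""
    w = np.zeros(d)
    overlaps = []
    for t in range(T):
        x = sampler(1)[0]
        if standardize:
            x = whiten @ (x - mean)
        y = np.tanh(x @ w_star)                    # noiseless teacher output
        err = np.tanh(x @ w) - y                   # student residual
        grad = err * (1.0 - np.tanh(x @ w) ** 2) * x
        w -= lr * grad
        if t % 100 == 0:
            denom = np.linalg.norm(w) * np.linalg.norm(w_star) + 1e-12
            overlaps.append(w @ w_star / denom)
    return np.array(overlaps)

o_gauss = run_sgd(sample_gaussian)
o_mix_raw = run_sgd(sample_mixture)
o_mix_std = run_sgd(sample_mixture, standardize=True)

print("final teacher-student overlap:")
print("  Gaussian inputs:               %.3f" % o_gauss[-1])
print("  mixture inputs (raw):          %.3f" % o_mix_raw[-1])
print("  mixture inputs (standardized): %.3f" % o_mix_std[-1])
```

The whitening matrix is built from the mixture's exact first two moments, mirroring the standardization equation above; comparing the full overlap trajectories (not just the final values) is what would reveal whether the standardized-mixture dynamics match the Gaussian ones throughout training.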