🤖 AI Summary
This work investigates how the generalization gap evolves during gradient descent training of deep neural networks, aiming to uncover the interplay between dataset properties and network architecture in determining generalization performance. We propose a differential equation model for the evolution of the generalization gap, built on a contraction factor and a perturbation factor, and formally define an “effective Gram matrix.” We prove that the eigenspace of this matrix with the smallest eigenvalues governs the residual dynamics, and we quantitatively characterize the generalization gap via the alignment between this matrix and the initial residual. Theoretically, we show that generalization depends critically on data–architecture compatibility and that training exhibits a benign structure: the residual remains largely confined to this minimal eigenspace throughout optimization. Empirically, our framework accurately predicts test loss on image classification tasks. Notably, since the generalization gap is zero at initialization and does not significantly deteriorate during training, the analysis yields a quantifiable criterion for co-designing architectures and datasets.
📝 Abstract
We derive a differential equation that governs the evolution of the generalization gap when a deep network is trained by gradient descent. This differential equation is controlled by two quantities: a contraction factor that draws together trajectories corresponding to slightly different datasets, and a perturbation factor that accounts for the fact that these trajectories are trained on different datasets. We analyze this differential equation to compute an “effective Gram matrix” that characterizes the generalization gap after training in terms of the alignment between this Gram matrix and a certain initial “residual”. Empirical evaluations on image classification datasets indicate that this analysis predicts the test loss accurately. Further, at any point during training, the residual lies predominantly in the subspace of the effective Gram matrix with the smallest eigenvalues. This indicates that the training process is benign: it does not significantly deteriorate the generalization gap, which is zero at initialization. The alignment between the effective Gram matrix and the residual differs across datasets and architectures; this match or mismatch between the data and the architecture is primarily responsible for good or bad generalization.
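The central alignment claim can be illustrated numerically. The sketch below is a minimal, self-contained stand-in: `G` is a synthetic symmetric positive semi-definite matrix playing the role of the effective Gram matrix (not the paper's actual construction), and `r` is a synthetic initial residual. It measures what fraction of the residual's energy lies in the `k` directions of `G` with the smallest eigenvalues; for a residual that truly concentrates in the minimal eigenspace, this fraction would be near 1 even for small `k`, whereas a random residual gives roughly `k / n`.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100  # number of training samples (synthetic)

# Synthetic stand-in for an "effective Gram matrix": symmetric PSD.
A = rng.standard_normal((n, n))
G = A @ A.T / n

# Synthetic initial residual (e.g., labels minus predictions at init),
# normalized so that energy fractions sum to 1 over a full basis.
r = rng.standard_normal(n)
r /= np.linalg.norm(r)

# Eigendecomposition; np.linalg.eigh returns eigenvalues in ascending
# order, so the first k columns of eigvecs span the minimal eigenspace.
eigvals, eigvecs = np.linalg.eigh(G)

def energy_in_smallest(k):
    """Fraction of the residual's energy in the k smallest-eigenvalue
    directions of G."""
    coeffs = eigvecs[:, :k].T @ r
    return float(coeffs @ coeffs)

print(energy_in_smallest(10))  # baseline for a random residual
```

Replacing `G` and `r` with the quantities computed from a trained network and its dataset would turn this diagnostic into the paper's alignment measurement; here they are purely illustrative.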