🤖 AI Summary
There is a significant gap between the theoretical convergence guarantees of deep learning optimization algorithms and their empirical performance, largely because commonly adopted assumptions, such as Hessian boundedness, lack empirical validation.
Method: We introduce the first trajectory-aware measurement framework closely aligned with the key quantities that must be controlled in theoretical analysis, and use it to systematically evaluate the validity of mainstream assumptions across diverse architectures and datasets in large-scale training runs. The framework quantifies dynamic properties along optimization trajectories, including gradient norms, Hessian spectral characteristics, and loss curvature.
Contribution/Results: We find that none of the examined theoretical assumptions reliably predicts actual convergence behavior, and none exhibits a robust correlation with optimization performance. This work exposes a fundamental misalignment between theoretical modeling and practice and establishes the first reproducible benchmark for empirically calibrating the assumptions on which optimization theory is built.
📝 Abstract
There is a significant gap between our theoretical understanding of optimization algorithms used in deep learning and their practical performance. Theoretical development usually focuses on proving convergence guarantees under a variety of different assumptions, which are themselves often chosen based on a rough combination of intuitive match to practice and analytical convenience. The theory/practice gap may then arise because of the failure to prove a theorem under such assumptions, or because the assumptions do not reflect reality. In this paper, we carefully measure the degree to which these assumptions are capable of explaining modern optimization algorithms by developing new empirical metrics that closely track the key quantities that must be controlled in theoretical analysis. All of our tested assumptions (including typical modern assumptions based on bounds on the Hessian) fail to reliably capture optimization performance. This highlights a need for new empirical verification of analytical assumptions used in theoretical analysis.
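To make the idea of "empirical metrics that closely track the key quantities" concrete, here is a minimal hypothetical sketch (not the paper's actual framework): classical convergence analyses assume the gradient is Lipschitz, i.e. the Hessian is bounded by a global smoothness constant, and one simple trajectory-aware check is to log the finite-difference smoothness estimate `||∇f(w_{t+1}) − ∇f(w_t)|| / ||w_{t+1} − w_t||` at every step of training. The 1-D least-squares problem, step count, and learning rate below are all illustrative choices.

```python
import random

def grad(w, data):
    """Gradient of the mean squared loss 0.5*(w*x - y)^2 over the dataset."""
    return sum((w * x - y) * x for x, y in data) / len(data)

def local_smoothness_trace(data, w0=0.0, lr=0.1, steps=20):
    """Run gradient descent, logging the local gradient-Lipschitz estimate
    |g_{t+1} - g_t| / |w_{t+1} - w_t| at each step of the trajectory."""
    w, g = w0, grad(w0, data)
    trace = []
    for _ in range(steps):
        w_next = w - lr * g
        g_next = grad(w_next, data)
        if w_next != w:  # skip degenerate zero-length steps
            trace.append(abs(g_next - g) / abs(w_next - w))
        w, g = w_next, g_next
    return trace

random.seed(0)
data = [(x, 2.0 * x + random.gauss(0, 0.1)) for x in [0.5, 1.0, 1.5, 2.0]]
trace = local_smoothness_trace(data)
# Because this toy loss is quadratic, every estimate equals the true constant
# curvature mean(x^2); on a deep network the trace would instead reveal how
# (and whether) local smoothness stays bounded along the trajectory.
```

On this quadratic the estimate is constant by construction; the point of running it along a real training trajectory is precisely to see whether any single bound explains the observed values.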