🤖 AI Summary
This work addresses the lack of provably sound hyperparameter tuning methods for gradient descent on non-convex, non-smooth objectives (such as neural networks with ReLU, sigmoid, or tanh activations), where the learning rate, momentum, step-size schedule, and initialization critically affect convergence. We propose the first automated framework for jointly tuning these hyperparameters that requires neither convexity nor smoothness assumptions. The method couples validation-loss-guided tuning with a rigorous sample complexity analysis to simultaneously optimize learning rate schedules, momentum coefficients, and pre-trained initializations, with provable guarantees. Theoretically, its sample complexity exceeds the optimal rate known for smooth, convex optimization by only a logarithmic factor. Empirically, the framework demonstrates strong performance and improved generalization across standard neural network training benchmarks.
📝 Abstract
Gradient-based iterative optimization methods are the workhorse of modern machine learning. They crucially rely on careful tuning of parameters like the learning rate and momentum. However, these are typically set using heuristics, without formal near-optimality guarantees. Recent work by Gupta and Roughgarden studies how to learn a good step-size in gradient descent. However, like most of the literature with theoretical guarantees for gradient-based optimization, their results rely on strong assumptions on the function class, including convexity and smoothness, which do not hold in typical applications. In this work, we develop novel analytical tools for provably tuning hyperparameters in gradient-based algorithms that apply to non-convex and non-smooth functions. We match, up to logarithmic factors, the sample complexity bounds that prior work established for learning the step-size on smooth, convex functions, but for a much broader class of functions. Our analysis applies to gradient descent on neural networks with commonly used activation functions (including ReLU, sigmoid and tanh). We extend our framework to tuning multiple hyperparameters, including tuning the learning rate schedule, simultaneously tuning momentum and step-size, and pre-training the initialization vector. Our approach can be used to bound the sample complexity of minimizing both the validation loss and the number of gradient descent iterations.
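To make the setup concrete, here is a minimal sketch of the data-driven step-size tuning problem the abstract describes: draw problem instances from some distribution, run gradient descent with each candidate step size, and select the candidate with the lowest average final loss. This is an illustration of the learning setup, not the paper's actual algorithm or analysis; `make_problem`, `run_gd`, and `tune_step_size` are hypothetical names, and random least-squares instances stand in for the unknown problem distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_problem(d=5):
    # A random least-squares instance; a stand-in for problem instances
    # drawn from an unknown distribution, as in the data-driven setting.
    A = rng.normal(size=(d, d))
    b = rng.normal(size=d)
    x0 = rng.normal(size=d)
    loss = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    grad = lambda x: A.T @ (A @ x - b)
    return x0, loss, grad

def run_gd(x0, grad, eta, steps=100):
    # Plain gradient descent with a fixed step size eta.
    x = x0.copy()
    for _ in range(steps):
        x = x - eta * grad(x)
    return x

def tune_step_size(problems, candidates, steps=100):
    # Empirical risk minimization over the candidate step sizes:
    # pick the eta with the lowest average final loss on the sample.
    avg_losses = []
    with np.errstate(over="ignore", invalid="ignore"):
        for eta in candidates:
            finals = []
            for x0, loss, grad in problems:
                val = loss(run_gd(x0, grad, eta, steps))
                finals.append(val if np.isfinite(val) else np.inf)  # diverged run
            avg_losses.append(np.mean(finals))
    return candidates[int(np.argmin(avg_losses))]

problems = [make_problem() for _ in range(20)]
candidates = [1e-3, 3e-3, 1e-2, 3e-2]
best_eta = tune_step_size(problems, candidates)
print("selected step size:", best_eta)
```

The paper's contribution is a sample complexity analysis for this kind of procedure: how many sampled instances suffice so that the selected step-size is near-optimal on the underlying distribution, even when the objectives (e.g., ReLU networks) are non-convex and non-smooth.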