GRADSTOP: Early Stopping of Gradient Descent via Posterior Sampling

📅 2025-08-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the inefficiency of validation-set-based early stopping in gradient descent, which sacrifices training data and limits generalisation in small-sample regimes, this paper proposes a validation-free stochastic early stopping method. The approach, GRADSTOP, approximates the Bayesian posterior distribution over model parameters using gradient statistics collected during training, and casts the choice of stopping point as drawing a sample from this posterior. The authors present this as a fully gradient-driven, posterior-guided early stopping framework that requires no validation split. Experiments on standard benchmarks and in low-data regimes, including transfer learning, show that GRADSTOP achieves small test loss and compares favourably to conventional validation-based early stopping, with negligible computational overhead and straightforward integration into existing optimisation pipelines.

📝 Abstract
Machine learning models are often learned by minimising a loss function on the training data using a gradient descent algorithm. These models often suffer from overfitting, leading to a decline in predictive performance on unseen data. A standard solution is early stopping using a hold-out validation set, which halts the minimisation when the validation loss stops decreasing. However, this hold-out set reduces the data available for training. This paper presents GRADSTOP, a novel stochastic early stopping method that only uses information in the gradients, which are produced by the gradient descent algorithm "for free." Our main contributions are that we estimate the Bayesian posterior from the gradient information, define the early stopping problem as drawing a sample from this posterior, and use the approximated posterior to obtain a stopping criterion. Our empirical evaluation shows that GRADSTOP achieves a small loss on test data and compares favourably to a validation-set-based stopping criterion. By leveraging the entire dataset for training, our method is particularly advantageous in data-limited settings, such as transfer learning. It can be incorporated as an optional feature in gradient descent libraries with only a small computational overhead. The source code is available at https://github.com/edahelsinki/gradstop.
Problem

Research questions and friction points this paper is trying to address.

Prevents overfitting without validation data reduction
Uses gradient information for Bayesian posterior estimation
Provides early stopping criterion via posterior sampling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian posterior estimation via gradient information
Stopping criterion from approximated posterior sampling
Eliminates need for hold-out validation sets
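The idea behind the bullets above can be sketched as a small training-loop callback. This is an illustrative stand-in, not the paper's actual criterion: the class name `GradStopSketch`, the `window` parameter, and the specific stopping rule (stop once the windowed mean gradient is indistinguishable from zero relative to its standard error, i.e. the gradient signal is lost in stochastic noise) are all assumptions made for the example; the real GRADSTOP rule draws the stopping time from a gradient-based posterior approximation.

```python
import numpy as np

class GradStopSketch:
    """Illustrative validation-free stopping rule driven by gradient statistics.

    Hypothetical criterion (NOT the paper's exact rule): keep a sliding
    window of recent gradients and stop once each coordinate's mean
    gradient lies within one standard error of zero.
    """

    def __init__(self, window=20):
        self.window = window
        self.grads = []

    def should_stop(self, grad):
        self.grads.append(np.asarray(grad, dtype=float))
        if len(self.grads) < self.window:
            return False  # not enough statistics yet
        recent = np.stack(self.grads[-self.window:])
        mean = recent.mean(axis=0)
        sem = recent.std(axis=0) / np.sqrt(self.window)  # standard error of the mean
        # Stop when the mean gradient is indistinguishable from zero,
        # i.e. the optimiser has entered the noise-dominated regime.
        return bool(np.all(np.abs(mean) <= sem + 1e-12))

# Toy usage: noisy gradient descent on f(w) = ||w||^2 / 2,
# where the true gradient is w plus Gaussian noise.
rng = np.random.default_rng(0)
w = np.array([5.0, -3.0])
stopper = GradStopSketch(window=20)
steps = 0
for steps in range(1, 2001):
    grad = w + rng.normal(scale=0.5, size=2)  # stochastic gradient
    w -= 0.1 * grad
    if stopper.should_stop(grad):
        break
print(f"stopped at step {steps}, w = {np.round(w, 2)}")
```

Because the rule consumes only gradients already computed by the optimiser, it adds no extra forward or backward passes, which matches the paper's claim of negligible overhead and easy integration as an optional library feature.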