Neural Velocity for Hyperparameter Tuning

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing hyperparameter tuning strategies, such as learning rate scheduling and early stopping, rely heavily on validation loss, incurring computational overhead and risking data leakage. To address this, we propose NeVe, a validation-free dynamic hyperparameter control framework grounded in neural velocity: the rate of change of each neuron's transfer function during training. NeVe estimates this metric by forwarding noise through the network, requiring neither labels nor a held-out validation set. Crucially, it treats neural velocity as an intrinsic indicator of model convergence, enabling adaptive learning rate adjustment and training termination without external supervision. Experiments across diverse architectures and tasks show that NeVe reduces reliance on validation sets while maintaining or improving convergence stability and final performance. By decoupling hyperparameter adaptation from validation metrics, NeVe offers an efficient, robust, validation-free approach to hyperparameter optimization.

📝 Abstract
Hyperparameter tuning, such as learning rate decay and defining a stopping criterion, often relies on monitoring the validation loss. This paper presents NeVe, a dynamic training approach that adjusts the learning rate and defines the stopping criterion based on the novel notion of "neural velocity". Neural velocity measures the rate of change of each neuron's transfer function and is an indicator of model convergence: it can be sampled even by forwarding noise through the network, reducing the need for a held-out dataset. Our findings show the potential of neural velocity as a key metric for optimizing neural network training efficiently.
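The abstract's core idea, sampling neural velocity by forwarding noise through the network, can be illustrated with a small sketch. The exact formulation used in the paper is not given here, so the relative-change measure below (per-neuron activation drift between two checkpoints on a fixed noise batch) is an assumption, not the authors' definition:

```python
import numpy as np

def neural_velocity(prev_acts, curr_acts, eps=1e-12):
    # Hypothetical velocity: relative change of each neuron's response
    # to the SAME fixed noise batch between two training checkpoints.
    # Rows index noise samples, columns index neurons.
    diff = np.linalg.norm(curr_acts - prev_acts, axis=0)
    scale = np.linalg.norm(prev_acts, axis=0) + eps
    return diff / scale

# Toy usage: responses of 4 neurons to a batch of 8 noise inputs,
# sampled before and after a small weight update.
rng = np.random.default_rng(0)
noise_batch = rng.normal(size=(8, 3))      # no labels or real data needed
W_prev = rng.normal(size=(3, 4))
W_curr = W_prev + 0.01 * rng.normal(size=(3, 4))
acts_prev = np.tanh(noise_batch @ W_prev)
acts_curr = np.tanh(noise_batch @ W_curr)
v = neural_velocity(acts_prev, acts_curr)  # one velocity per neuron
```

As training converges and the weights stop moving, the sampled velocities shrink toward zero, which is what makes the quantity usable as a convergence signal.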
Problem

Research questions and friction points this paper is trying to address.

- Optimizing the learning rate and stopping criterion dynamically
- Measuring neural velocity as an indicator of model convergence
- Reducing dependency on held-out validation datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Dynamic training built on the neural velocity concept
- Learning rate adjustment driven by neural velocity
- Stopping criterion based on each neuron's rate of change
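The two control decisions listed above (decaying the learning rate and stopping training) can both be driven by the sampled velocity. The thresholds and the decay rule below are illustrative assumptions, not the schedule from the paper:

```python
def velocity_controller(velocity, lr, decay=0.5,
                        lr_threshold=0.05, stop_threshold=0.01):
    # Hypothetical control rule: once the mean neural velocity falls
    # below lr_threshold the model is settling, so decay the learning
    # rate; once it falls below stop_threshold, signal termination.
    stop = velocity < stop_threshold
    if velocity < lr_threshold:
        lr *= decay
    return lr, stop

# Simulated run: velocities shrink as training converges.
lr, history = 0.1, []
for v in [0.30, 0.12, 0.04, 0.008]:
    lr, stop = velocity_controller(v, lr)
    history.append((lr, stop))
```

Because no validation loss is consulted, the same rule works with any architecture that exposes neuron activations, which is the decoupling the summary describes.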