Improving Neural Network Training using Dynamic Learning Rate Schedule for PINNs and Image Classification

📅 2025-07-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Static learning rates in neural network training struggle to adapt to complex, time-varying gradient dynamics, often resulting in slow convergence and training instability. To address this, we propose a Dynamic Learning Rate Scheduler (DLRS), a feedback-driven algorithm that adjusts the learning rate in real time based on the curvature and descent trend of the training loss—without relying on predefined decay schedules or additional hyperparameters. DLRS is architecture-agnostic and seamlessly integrates with diverse models, including Physics-Informed Neural Networks (PINNs), Multilayer Perceptrons (MLPs), and Convolutional Neural Networks (CNNs). Experiments demonstrate that DLRS accelerates training convergence by an average factor of 1.8×, improves optimization robustness, and achieves higher accuracy with more stable training trajectories across tasks: solving partial differential equations with PINNs and image classification on CIFAR-10/100. These results validate DLRS’s generality, effectiveness, and practical utility.
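The summary above describes DLRS as a feedback rule driven by the descent trend and curvature of the training loss. The paper's exact update rule is not reproduced here; the following is a minimal, framework-agnostic sketch of such a loss-feedback scheduler, in which the class name, the finite-difference trend/curvature estimates, and the multiplicative up/down factors are all illustrative assumptions, not the authors' algorithm.

```python
# Hypothetical sketch of a loss-feedback learning-rate scheduler in the
# spirit of DLRS. The trend/curvature heuristics and growth/shrink factors
# below are assumptions for illustration, not the paper's actual rule.
from collections import deque


class LossFeedbackLRScheduler:
    def __init__(self, lr=1e-3, up=1.05, down=0.7, min_lr=1e-6, max_lr=1.0):
        self.lr = lr
        self.up, self.down = up, down
        self.min_lr, self.max_lr = min_lr, max_lr
        self.history = deque(maxlen=3)  # last three loss values

    def step(self, loss):
        """Update the learning rate from the latest training loss."""
        self.history.append(float(loss))
        if len(self.history) < 3:
            return self.lr
        l0, l1, l2 = self.history
        trend = l2 - l1               # first difference: descent direction
        curvature = l2 - 2 * l1 + l0  # second difference: loss curvature
        if trend < 0 and curvature <= 0:
            # loss is falling and the decline is not flattening: grow LR gently
            self.lr *= self.up
        elif trend > 0:
            # loss rose: likely overshoot, shrink LR
            self.lr *= self.down
        self.lr = min(max(self.lr, self.min_lr), self.max_lr)
        return self.lr
```

In use, `step(loss)` would be called once per epoch (or per evaluation interval) and the returned value fed to the optimizer, so the schedule emerges from the loss trajectory rather than a predefined decay curve.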

📝 Abstract
Training neural networks can be challenging, especially as the complexity of the problem increases. Even with wider or deeper networks, training can be a tedious process, particularly when hyperparameters are poorly chosen. The learning rate is one such crucial hyperparameter, and it is usually kept static during training. Learning dynamics in complex systems often require a more adaptive approach to the learning rate; this adaptability is crucial for effectively navigating varying gradients and optimizing the training process. In this paper, a dynamic learning rate scheduler (DLRS) algorithm is presented that adapts the learning rate based on the loss values computed during training. Experiments are conducted on problems involving physics-informed neural networks (PINNs) and on image classification using multilayer perceptrons and convolutional neural networks, respectively. The results demonstrate that the proposed DLRS accelerates training and improves stability.
Problem

Research questions and friction points this paper is trying to address.

Dynamic learning rate adaptation for neural networks
Improving training efficiency in PINNs and image classification
Enhancing stability and speed in complex network training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic learning rate scheduler adapts to loss values
Applied to PINNs and image classification tasks
Improves training speed and stability