Optimal Control Theoretic Neural Optimizer: From Backpropagation to Dynamic Programming

📅 2025-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work formulates deep neural network training as an optimal control problem, uncovering a variational connection between backpropagation and dynamic programming: backpropagation corresponds to solving an approximate dynamic program to first order. Building on this insight, the authors propose OCNOpt, a novel neural optimizer that explores higher-order expansions of the Bellman equation. OCNOpt supports layer-wise feedback policies, game-theoretic cooperative training, and higher-order optimization of continuous-time models such as Neural ODEs. Compared with standard gradient descent, OCNOpt achieves significantly improved optimization robustness and convergence efficiency while keeping computational complexity manageable. The study establishes a rigorous theoretical bridge between backpropagation and the variational principles of optimal control, extends the dynamical-systems paradigm for neural network optimization, and opens new avenues for designing higher-order, hierarchical, and distributed training algorithms.

📝 Abstract
Optimization of deep neural networks (DNNs) has been a driving force in the advancement of modern machine learning and artificial intelligence. Since DNNs are characterized by a prolonged sequence of nonlinear propagation, determining their optimal parameters given an objective fits naturally within the framework of optimal control. This interpretation of DNNs as dynamical systems has proven crucial in offering a theoretical foundation for principled analysis, from numerical methods to physics. In parallel to these theoretical pursuits, this paper focuses on an algorithmic perspective. Our motivating observation is the striking algorithmic resemblance between the Backpropagation algorithm for computing gradients in DNNs and the optimality conditions for dynamical systems, expressed through another backward process known as dynamic programming. Consolidating this connection shows that Backpropagation admits a variational structure: it solves an approximate dynamic programming problem up to a first-order expansion, and exploring higher-order expansions of the Bellman equation leads to a new class of optimization methods. The resulting optimizer, termed the Optimal Control Theoretic Neural Optimizer (OCNOpt), enables rich algorithmic opportunities, including layer-wise feedback policies, game-theoretic applications, and higher-order training of continuous-time models such as Neural ODEs. Extensive experiments demonstrate that OCNOpt improves upon existing methods in robustness and efficiency while maintaining manageable computational complexity, paving new avenues for principled algorithmic design grounded in dynamical systems and optimal control theory.
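The backpropagation–dynamic programming correspondence described in the abstract can be sketched in standard optimal-control notation (an illustrative simplification, not necessarily the paper's exact formulation), treating layer activations $x_t$ as states and layer parameters $\theta_t$ as controls:

```latex
% Layers as a discrete-time dynamical system:
%   x_{t+1} = f_t(x_t, \theta_t), \quad t = 0, \dots, T-1.
% Bellman recursion for the cost-to-go (value) function:
V_t(x_t) = \min_{\theta_t} \Bigl[ \ell_t(x_t, \theta_t)
           + V_{t+1}\bigl(f_t(x_t, \theta_t)\bigr) \Bigr]
% Differentiating and keeping only first-order terms yields the
% adjoint recursion computed by Backpropagation:
\nabla_{x_t} V_t = \nabla_{x_t} \ell_t
  + \Bigl(\tfrac{\partial f_t}{\partial x_t}\Bigr)^{\!\top} \nabla_{x_{t+1}} V_{t+1}
```

Retaining higher-order terms of $V_{t+1}$ in the same recursion keeps curvature information that plain backpropagation discards, which is the algorithmic room the paper exploits.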
Problem

Research questions and friction points this paper is trying to address.

Connects backpropagation to dynamic programming for neural network optimization
Develops new optimizer using higher-order Bellman equation expansions
Enables layer-wise feedback and robust training of continuous-time models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic programming generalizes backpropagation for gradient computation
Higher-order Bellman equation expansions enable new optimizers
Layer-wise feedback policies improve robustness and efficiency
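The layer-wise feedback idea can be sketched as follows. This is a minimal, hypothetical illustration in plain NumPy: the toy linear network, the feedback rule, and all names are assumptions made for exposition, not the paper's actual algorithm, which derives its updates from higher-order Bellman expansions.

```python
import numpy as np

# Hypothetical sketch of a layer-wise feedback update in the spirit of OCNOpt.
# Each layer combines the usual backprop gradient with a feedback term that
# reacts to how much its input deviates from the activation recorded during
# the backward pass. All modeling choices here are illustrative assumptions.

rng = np.random.default_rng(0)

# Toy 2-layer linear network: x1 = W0 x0, x2 = W1 x1, loss = 0.5 ||x2 - y||^2.
W = [rng.standard_normal((3, 3)) * 0.5 for _ in range(2)]
x0 = rng.standard_normal(3)
y = np.zeros(3)

def forward(W, x0):
    xs = [x0]
    for Wt in W:
        xs.append(Wt @ xs[-1])
    return xs

lr, feedback_gain = 0.1, 0.5
loss0 = 0.5 * np.linalg.norm(forward(W, x0)[-1] - y) ** 2

for step in range(100):
    xs = forward(W, x0)
    # Backward pass: the adjoint recursion, i.e. the first-order
    # expansion of the Bellman equation -- exactly backpropagation.
    lams = [xs[-1] - y]                           # dV/dx at the output
    grads = []
    for t in reversed(range(len(W))):
        grads.append(np.outer(lams[-1], xs[t]))   # dL/dW_t
        lams.append(W[t].T @ lams[-1])            # adjoint at x_t
    grads.reverse()
    adj = list(reversed(lams))                    # adj[t] = dV/dx at x_t

    # Layer-wise update with feedback: replay the forward pass and let each
    # layer correct for the deviation of its (already updated) input from
    # the nominal rollout.
    x = x0
    for t in range(len(W)):
        delta = x - xs[t]
        W[t] -= lr * (grads[t] + feedback_gain * np.outer(adj[t + 1], delta))
        x = W[t] @ x

final_loss = 0.5 * np.linalg.norm(forward(W, x0)[-1] - y) ** 2
print(final_loss)
```

With `feedback_gain = 0` this reduces to plain gradient descent; the feedback term is what distinguishes a closed-loop, per-layer policy from an open-loop gradient step.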