How Inductive Bias in Machine Learning Aligns with Optimality in Economic Dynamics

📅 2024-06-04
📈 Citations: 1
Influential: 0
🤖 AI Summary
Conventional shooting methods for infinite-horizon dynamic economic models rely on steady-state assumptions to impose transversality conditions, leading to numerical instability, parameter sensitivity, and difficulty in handling multiple equilibria. Method: This paper proposes a machine learning–driven, constraint-free solution framework. We theoretically establish that the minimum-norm ML solution inherently satisfies infinite-horizon boundary conditions without explicit transversality enforcement. Contribution/Results: This work is the first to show that black-box models, including kernel methods, deep neural networks, and differential equation–constrained learning, are implicitly structurally sound for solving economic dynamics. Experiments confirm the theory and demonstrate empirical superiority over classical algorithms in low- to medium-dimensional settings. Moreover, the approach significantly enhances feasibility, robustness, and interpretability for high-dimensional dynamic systems, offering a novel paradigm for high-dimensional inverse problems and embedded optimal control.

📝 Abstract
This paper examines the alignment of inductive biases in machine learning (ML) with structural models of economic dynamics. Unlike dynamical systems found in the physical and life sciences, economic models are often specified by differential equations with a mixture of easy-to-enforce initial conditions and hard-to-enforce infinite-horizon boundary conditions (e.g. transversality and no-Ponzi-scheme conditions). Traditional methods for enforcing these constraints are computationally expensive and unstable. We investigate algorithms where those infinite-horizon constraints are ignored, simply training unregularized kernel machines and neural networks to obey the differential equations. Despite the inherent underspecification of this approach, our findings reveal that the inductive biases of these ML models innately enforce the infinite-horizon conditions necessary for well-posedness. We theoretically demonstrate that obtaining an (approximate or exact) min-norm ML solution to the interpolation problem is sufficient to satisfy these infinite-horizon boundary conditions in a wide class of problems. We then provide empirical evidence that deep learning and ridgeless kernel methods are not only theoretically sound with respect to economic assumptions, but may even dominate classic algorithms in low to medium dimensions. More importantly, these results give confidence that, despite solving seemingly ill-posed problems, there are reasons to trust the plethora of black-box ML algorithms used by economists to solve previously intractable, high-dimensional dynamical systems -- paving the way for future work on estimation of inverse problems with embedded optimal control problems.
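The abstract's central mechanism, selecting the minimum-norm solution of an underdetermined interpolation problem, can be sketched in a few lines. This is an illustrative toy (not the paper's code): `numpy.linalg.lstsq` returns the minimum-norm solution among all exact interpolants, the same selection principle the paper attributes to unregularized ML training.

```python
import numpy as np

# Underdetermined linear interpolation: 5 equations, 20 unknowns.
# Among the infinitely many exact interpolants, lstsq returns the one
# of minimum Euclidean norm.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 20))
y = rng.standard_normal(5)

w_min, *_ = np.linalg.lstsq(X, y, rcond=None)

# Any w_min + v with v in the null space of X also interpolates exactly,
# but has strictly larger norm (v is orthogonal to the row space of X).
P_row = X.T @ np.linalg.pinv(X @ X.T) @ X      # projector onto row space of X
v = (np.eye(20) - P_row) @ np.ones(20)          # a nonzero null-space direction
w_alt = w_min + v
```

Both `w_min` and `w_alt` satisfy `X @ w == y` to numerical precision, but `w_min` has the smaller norm; in the paper's setting, this implicit preference is what rules out explosive trajectories without enforcing transversality.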
Problem

Research questions and friction points this paper is trying to address.

Solving infinite-horizon economic dynamics models with differential-algebraic equations
Overcoming numerical instability in traditional shooting methods for boundary conditions
Handling cases with multiple steady states through the implicit regularization of ridgeless kernel methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ridgeless kernel regression solves economic dynamics models
Minimum norm solution selects non-explosive trajectory automatically
Handles multiple steady states without direct boundary enforcement
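The ridgeless kernel idea in the bullets above can be sketched as follows. This is a hedged toy illustration (the data and kernel choice are ours, not the paper's model): ridgeless kernel regression interpolates sampled trajectory points exactly by solving `K @ alpha = y` with no regularization term, and the resulting predictor is the minimum-RKHS-norm interpolant, which for a Gaussian kernel decays away from the data rather than exploding.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.5):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

t = np.linspace(0.0, 3.0, 8)[:, None]   # sample times
y = np.exp(-t[:, 0])                    # a decaying (non-explosive) toy path

K = rbf_kernel(t, t)
alpha = np.linalg.solve(K, y)           # ridgeless: exact interpolation

def predict(t_new):
    """Minimum-RKHS-norm interpolant evaluated at new times."""
    return rbf_kernel(np.atleast_2d(t_new), t) @ alpha
```

`predict(t)` reproduces `y` at the sample times, while evaluations far outside the data tend to zero, a loose analogue of the min-norm solution automatically selecting the non-explosive trajectory.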