🤖 AI Summary
This paper addresses the computational challenge of solving for equilibrium price dynamics in heterogeneous-agent models with aggregate macroeconomic risk, particularly where nontrivial market-clearing conditions render conventional approaches (e.g., Krusell–Smith dimensionality reduction or the master equation) inapplicable. The authors propose a structural reinforcement learning (SRL) framework that treats low-dimensional prices as state variables, bypassing high-dimensional distributional representations. SRL jointly trains individual optimization and equilibrium conditions while embedding structural priors that preserve microfoundations. Crucially, it sidesteps the master equation entirely, enabling global, efficient, and high-accuracy equilibrium computation. The method converges within minutes on benchmark models, including the Krusell–Smith economy, the Huggett model with aggregate shocks, and a HANK model with a forward-looking Phillips curve, while meeting stringent steady-state accuracy requirements.
📝 Abstract
We present a new approach to formulating and solving heterogeneous agent models with aggregate risk. We replace the cross-sectional distribution with low-dimensional prices as state variables and let agents learn equilibrium price dynamics directly from simulated paths. To do so, we introduce a structural reinforcement learning (SRL) method that handles prices via simulation while exploiting agents' structural knowledge of their own individual dynamics. Our SRL method yields a general and highly efficient global solution method for heterogeneous agent models that sidesteps the master equation and handles problems traditional methods struggle with, in particular nontrivial market-clearing conditions. We illustrate the approach in the Krusell–Smith model, the Huggett model with aggregate shocks, and a HANK model with a forward-looking Phillips curve, all of which we solve globally within minutes.
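The core idea of carrying low-dimensional prices as state variables, rather than the cross-sectional distribution, can be illustrated with a toy fixed-point sketch. This is not the paper's SRL algorithm: the linear perceived price rule `p_hat = a + b*z` and the stand-in "market-clearing" map `p = 0.5 + 0.3*z + 0.4*p_hat` are invented for illustration. The loop simulates paths under the current perceived rule, observes the implied equilibrium price, and refits the rule until it is self-consistent.

```python
import numpy as np

# Agents perceive a low-dimensional law of motion for the price as a
# function of the aggregate shock z, instead of tracking the full
# cross-sectional distribution. We iterate: simulate -> clear market ->
# refit the perceived rule -> repeat until self-consistent.

rng = np.random.default_rng(0)
T = 5_000
z = np.empty(T)
z[0] = 0.0
for t in range(1, T):                      # AR(1) aggregate shock path
    z[t] = 0.9 * z[t - 1] + 0.1 * rng.standard_normal()

a, b = 0.0, 0.0                            # perceived rule: p_hat = a + b*z
for _ in range(200):
    p_hat = a + b * z                      # forecast under current rule
    p = 0.5 + 0.3 * z + 0.4 * p_hat       # hypothetical market-clearing map
    X = np.column_stack([np.ones(T), z])   # refit rule on simulated data
    (a_new, b_new), *_ = np.linalg.lstsq(X, p, rcond=None)
    converged = abs(a_new - a) + abs(b_new - b) < 1e-10
    a, b = a_new, b_new
    if converged:
        break

# Fixed point of a' = 0.5 + 0.4a, b' = 0.3 + 0.4b: a = 5/6, b = 1/2.
print(a, b)
```

In this linear toy the iteration contracts at rate 0.4, so the perceived rule converges to the self-consistent equilibrium rule in a few dozen passes; the paper's SRL method replaces this hand-crafted regression step with reinforcement learning over simulated price paths while keeping agents' structural knowledge of their own dynamics.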