Tractable Representations for Convergent Approximation of Distributional HJB Equations

📅 2025-03-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Distributional reinforcement learning (DRL) in the continuous-time RL (CTRL) setting lacks theoretical foundations and efficient algorithms. Method: This paper introduces the first approximation framework for the distributional Hamilton–Jacobi–Bellman (DHJB) equation with provable convergence guarantees. We propose a topological consistency condition on distributional parameterizations and prove that the quantile function representation satisfies it, establishing the first rigorous convergence analysis for DHJB approximation. We further design an efficient numerical algorithm that overcomes the analytical intractability and computational bottlenecks of modeling return distributions in continuous time. Contribution/Results: Our work provides sufficient conditions for the solvability of the DHJB equation and lays a theoretically grounded distributional-modeling foundation for risk-sensitive continuous-time optimal control, complemented by a practical computational tool.
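To make the flavor of such a numerical scheme concrete, below is a minimal toy sketch, assuming a finite state space, deterministic dynamics, and an Euler-style small-time-step backup in quantile coordinates. All names here (`euler_quantile_update`, `theta`, `taus`) are hypothetical illustrations; the paper's actual update rule is derived from the DHJB equation and is not reproduced here.

```python
import numpy as np

num_states, m = 10, 32           # toy state space and number of quantiles
dt = 0.01                        # small time increment for the Euler step
beta = 0.1                       # discount rate; per-step discount is exp(-beta * dt)
taus = (np.arange(m) + 0.5) / m  # quantile levels the columns of theta correspond to

# theta[s, i] approximates the taus[i]-quantile of the return distribution at state s.
theta = np.zeros((num_states, m))

def euler_quantile_update(theta, reward_rates, next_states):
    """One small-time-step distributional backup in quantile coordinates.

    reward_rates[s] -- instantaneous reward rate at state s
    next_states[s]  -- state reached from s after time dt (deterministic toy dynamics)
    """
    discount = np.exp(-beta * dt)
    target = reward_rates[:, None] * dt + discount * theta[next_states]
    # Re-sorting each row keeps it a valid (monotone) quantile function.
    return np.sort(target, axis=1)

# Example: a ring of states with unit reward rate everywhere.
reward_rates = np.ones(num_states)
next_states = (np.arange(num_states) + 1) % num_states
for _ in range(10_000):
    theta = euler_quantile_update(theta, reward_rates, next_states)
print(theta[0, :3])  # approaches 1/beta = 10 (deterministic toy, so all quantiles coincide)
```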

📝 Abstract
In reinforcement learning (RL), the long-term behavior of decision-making policies is evaluated based on their average returns. Distributional RL has emerged, presenting techniques for learning return distributions that provide additional statistics for evaluating policies and incorporating risk-sensitive considerations. When the passage of time cannot naturally be divided into discrete time increments, researchers have studied the continuous-time RL (CTRL) problem, where agent states and decisions evolve continuously. In this setting, the Hamilton-Jacobi-Bellman (HJB) equation is well established as the characterization of the expected return, and many solution methods exist. However, the study of distributional RL in the continuous-time setting is in its infancy. Recent work has established a distributional HJB (DHJB) equation, providing the first characterization of return distributions in CTRL. This equation and its solutions are intractable to represent and solve exactly, requiring novel approximation techniques. This work takes strides towards this end, establishing conditions on the method of parameterizing return distributions under which the DHJB equation can be approximately solved. In particular, we show that under a certain topological property of the mapping between the statistics learned by a distributional RL algorithm and the corresponding distributions, approximation of these statistics leads to close approximations of the solution of the DHJB equation. Concretely, we demonstrate that the quantile representation common in distributional RL satisfies this topological property, certifying an efficient approximation algorithm for continuous-time distributional RL.
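For context, the HJB characterization of the expected return mentioned in the abstract takes the following standard textbook form for controlled diffusion dynamics (a well-known statement, not reproduced from this paper); the notation V, rho, r, f, sigma is standard and assumed here.

```latex
% Classical HJB equation for the discounted expected return (textbook form):
% V is the value function, rho > 0 the discount rate, r the reward,
% and f, sigma the drift and diffusion of the state dynamics.
\rho V(x) = \max_{a \in \mathcal{A}} \left\{ r(x, a)
  + \nabla V(x)^{\top} f(x, a)
  + \tfrac{1}{2} \operatorname{tr}\!\left( \sigma(x, a)\, \sigma(x, a)^{\top} \nabla^{2} V(x) \right) \right\}
```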
Problem

Research questions and friction points this paper is trying to address.

How to represent return distributions tractably when solving distributional HJB (DHJB) equations, which are intractable to represent and solve exactly.
How to approximate the DHJB equation in continuous-time distributional RL with convergence guarantees.
Which topological properties of a distributional parameterization suffice for efficient, provably convergent approximation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

A parameterization framework for return distributions under which the DHJB equation can be approximately solved
A topological condition on the mapping from learned statistics to distributions that guarantees convergent approximation
Proof that the quantile representation satisfies this condition, certifying close approximation of DHJB solutions (see the sketch below)
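The sketch referenced in the last bullet: the topological good behavior of quantile representations can be illustrated with a classical optimal-transport fact, namely that the Wasserstein-1 distance between two distributions equals the L1 distance between their quantile (inverse-CDF) functions, so small errors in learned quantile statistics translate directly into small errors in the represented distributions. This identity is standard, not specific to this paper, and the function name below is hypothetical.

```python
import numpy as np
from scipy.stats import norm

def w1_from_quantiles(q1: np.ndarray, q2: np.ndarray) -> float:
    """Approximate W1(mu1, mu2) as the L1 distance between quantile functions.

    q1, q2 hold m evenly spaced quantile values of two distributions, so the
    mean absolute difference is a Riemann sum of |F1^{-1}(tau) - F2^{-1}(tau)| dtau.
    """
    assert q1.shape == q2.shape
    return float(np.mean(np.abs(q1 - q2)))

# Example: two Gaussians that differ only by a mean shift of 0.5,
# each represented by 1000 quantile midpoints.
taus = (np.arange(1000) + 0.5) / 1000
q_a = norm.ppf(taus, loc=0.0, scale=1.0)
q_b = norm.ppf(taus, loc=0.5, scale=1.0)
print(w1_from_quantiles(q_a, q_b))  # ~0.5: W1 recovers the mean shift exactly
```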
Julie Alhosh
School of Computer Science, McGill University, Montréal, Québec, Canada
Harley Wiltzer
McGill University, Mila
reinforcement learning · control theory · robotics
D. Meger
Centre for Intelligent Machines, School of Computer Science, McGill University, Montréal, Québec, Canada