🤖 AI Summary
This work addresses non-expected-utility optimization problems, such as risk-sensitive decision-making and homeostatic regulation, where distributional properties of the return (e.g., tail risk) are the quantities of interest. The authors propose a distributional dynamic programming framework built on stock augmentation, in which the MDP state is augmented with a statistic of the rewards obtained so far (the "stock"). Leveraging distributional value and policy iteration, the method optimizes statistical functionals of the return distribution, such as the conditional value-at-risk (CVaR), that lie beyond the reach of expected utilities. Theoretically, the work establishes convergence guarantees, characterizes which objectives these distributional DP methods can and cannot optimize, and derives error bounds for the iterative updates. Empirically, the core ideas of distributional value iteration are combined with the deep RL agent DQN and evaluated on risk-control and regulation benchmarks.
📝 Abstract
We introduce distributional dynamic programming (DP) methods for optimizing statistical functionals of the return distribution, with standard reinforcement learning as a special case. Previous distributional DP methods could optimize only the same class of expected utilities as classic DP. To go beyond expected utilities, we combine distributional DP with stock augmentation, a technique previously introduced for classic DP in the context of risk-sensitive RL, where the MDP state is augmented with a statistic of the rewards obtained so far (since the first time step). We find that a number of recently studied problems can be formulated as stock-augmented return distribution optimization, and we show that we can use distributional DP to solve them. We analyze distributional value and policy iteration, with error bounds and a study of which objectives these distributional DP methods can or cannot optimize. We describe several applications showing how to use distributional DP to solve different stock-augmented return distribution optimization problems, for example maximizing conditional value-at-risk, and homeostatic regulation. To highlight the practical potential of stock-augmented return distribution optimization and distributional DP, we combine the core ideas of distributional value iteration with the deep RL agent DQN, and empirically evaluate it on instances of the applications discussed.
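To make the idea of stock augmentation concrete, here is a minimal illustrative sketch (not taken from the paper): a two-step MDP where the "stock" is simply the reward accumulated so far, and where the policy is allowed to condition on it. With the state so augmented, a statistical functional like CVaR of the total return becomes optimizable by ordinary enumeration or dynamic programming over the augmented state space. The MDP, the `safe`/`risky` actions, and all numbers are invented for illustration.

```python
from itertools import product

# Toy 2-step MDP (hypothetical): at each step, choose an action.
# "safe" yields reward 1 surely; "risky" yields 3 or 0 with equal probability.
STEP = {  # action -> list of (reward, probability)
    "safe":  [(1, 1.0)],
    "risky": [(3, 0.5), (0, 0.5)],
}

def cvar(dist, alpha):
    """CVaR_alpha: expected value of the worst alpha-fraction of outcomes."""
    total, acc = 0.0, 0.0
    for x, p in sorted(dist.items()):  # ascending outcomes = worst first
        take = min(p, alpha - acc)
        if take <= 0:
            break
        total += x * take
        acc += take
    return total / alpha

def return_dist(a0, a1_by_stock):
    """Return distribution of a stock-augmented policy.

    a0 is the first action; a1_by_stock maps the stock (reward so far)
    to the second action, so the policy can react to realized rewards.
    """
    dist = {}
    for r0, p0 in STEP[a0]:
        for r1, p1 in STEP[a1_by_stock[r0]]:
            g = r0 + r1
            dist[g] = dist.get(g, 0.0) + p0 * p1
    return dist

# Enumerate all stock-augmented deterministic policies and pick the
# CVaR_0.5-optimal one. Stocks reachable after step one are 0, 1, or 3.
stocks = (0, 1, 3)
policies = [
    (a0, dict(zip(stocks, a1s)))
    for a0 in STEP
    for a1s in product(STEP, repeat=len(stocks))
]
best = max(policies, key=lambda pi: cvar(return_dist(*pi), alpha=0.5))
print(best[0], cvar(return_dist(*best), alpha=0.5))
```

Note the qualitative point this toy makes: the expectation-optimal policy plays `risky` (expected step reward 1.5 versus 1), but the CVaR_0.5-optimal policy plays `safe` for a guaranteed return of 2, since CVaR penalizes the bad tail. The augmented statistic is what lets the second-step choice depend on realized rewards, which is exactly what plain (non-augmented) DP cannot express for such objectives.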