Bellman Optimality of Average-Reward Robust Markov Decision Processes with a Constant Gain

📅 2025-09-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Average-reward robust Markov decision processes (MDPs) have long lacked a unified theoretical framework. Method: This paper develops a dynamic programming foundation for the constant-gain setting, allowing for information asymmetry between the controller and an S-rectangular adversary. It formulates and rigorously analyzes the constant-gain robust Bellman equation for average-reward robust MDPs, deriving sufficient conditions for the existence of solutions and identifying when those solutions characterize the optimal average reward. It further characterizes optimal stationary policies. Contribution/Results: The work addresses fundamental theoretical gaps in average-reward robust MDPs, specifically solution existence, optimality characterization, and policy structure, laying a rigorous dynamic programming foundation for robust decision making under long-run average criteria in operational settings.
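For reference, the object at the center of the analysis can be written in a standard form (generic notation, not necessarily the paper's): the constant-gain robust Bellman equation under an S-rectangular adversary asks for a scalar gain $g$ and a bias function $h : S \to \mathbb{R}$ such that, for every state $s \in S$,

$$g + h(s) = \max_{\pi(\cdot \mid s) \in \Delta(A)} \; \min_{p_s \in \mathcal{P}_s} \; \sum_{a \in A} \pi(a \mid s) \Big[ r(s,a) + \sum_{s' \in S} p_s(s' \mid a)\, h(s') \Big].$$

S-rectangularity means the adversary chooses the transition law at each state jointly across actions ($\mathcal{P} = \prod_{s} \mathcal{P}_s$), so the max and min generally do not interchange and the controller may need to randomize; the order of the two operators encodes who acts with knowledge of the other's choice, which is one way the controller-adversary information asymmetry enters.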

📝 Abstract
Learning and optimal control under robust Markov decision processes (MDPs) have received increasing attention, yet most existing theory, algorithms, and applications focus on finite-horizon or discounted models. The average-reward formulation, while natural in many operations research and management contexts, remains underexplored. This is primarily because the dynamic programming foundations are technically challenging and only partially understood, with several fundamental questions remaining open. This paper steps toward a general framework for average-reward robust MDPs by analyzing the constant-gain setting. We study the average-reward robust control problem with possible information asymmetries between the controller and an S-rectangular adversary. Our analysis centers on the constant-gain robust Bellman equation, examining both the existence of solutions and their relationship to the optimal average reward. Specifically, we identify when solutions to the robust Bellman equation characterize the optimal average reward and stationary policies, and we provide sufficient conditions ensuring that such solutions exist. These findings expand the dynamic programming theory for average-reward robust MDPs and lay a foundation for robust dynamic decision making under long-run average criteria in operational environments.
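For orientation, the classical non-robust counterpart of this program is well understood: for a finite MDP, if a pair $(g^*, h)$ solves the average-reward Bellman (optimality) equation

$$g^* + h(s) = \max_{a \in A} \Big\{ r(s,a) + \sum_{s' \in S} p(s' \mid s, a)\, h(s') \Big\} \quad \text{for all } s \in S,$$

then $g^*$ is the optimal long-run average reward and any stationary policy attaining the maximum at every state is optimal, with existence of solutions guaranteed under unichain-type conditions. The paper asks when, and in what form, this certification survives an S-rectangular adversary and information asymmetry.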
Problem

Research questions and friction points this paper is trying to address.

Analyzing constant-gain average-reward robust MDPs
Establishing Bellman optimality conditions for robust control
Addressing information asymmetry between controller and adversary
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constant-gain robust Bellman equation analysis (a numerical sketch follows this list)
Existence conditions for optimal average reward
Stationary policy characterization under information asymmetry
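Purely as a numerical illustration of what a constant-gain computation can look like, here is a minimal Python sketch; it is not the paper's algorithm. It runs relative value iteration on the robust Bellman operator for a toy finite MDP whose S-rectangular uncertainty set contains finitely many candidate kernels per state, so each per-state max-min reduces to a small matrix game solved by linear programming via scipy.optimize.linprog. The function names (robust_rvi, matrix_game_value) and the instance are made up, and convergence of this iteration is not guaranteed in general, which is precisely the kind of gap the paper's theory addresses.

```python
# Illustrative only: robust relative value iteration for a finite
# average-reward robust MDP with a finite S-rectangular uncertainty set.
# Not the paper's algorithm; convergence is not guaranteed in general.
import numpy as np
from scipy.optimize import linprog


def matrix_game_value(M):
    """Value and optimal mixed strategy of max_x min_k (x^T M)_k,
    with x ranging over the probability simplex. M: (n_actions, n_scenarios)."""
    n_a, n_k = M.shape
    c = np.zeros(n_a + 1)
    c[0] = -1.0                                   # maximize v == minimize -v
    A_ub = np.hstack([np.ones((n_k, 1)), -M.T])   # v - (x^T M)_k <= 0 for all k
    b_ub = np.zeros(n_k)
    A_eq = np.hstack([np.zeros((1, 1)), np.ones((1, n_a))])  # sum_a x_a = 1
    b_eq = np.array([1.0])
    bounds = [(None, None)] + [(0.0, None)] * n_a  # v free, x >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[0], res.x[1:]


def robust_rvi(r, kernels, n_iter=1000, tol=1e-9):
    """Relative value iteration on the constant-gain robust Bellman operator.
    r: (S, A) reward array. kernels[s]: (K, A, S) array of candidate
    transition kernels at state s (S-rectangular: the adversary picks one
    kernel per state, jointly over all actions)."""
    n_s, _ = r.shape
    h = np.zeros(n_s)
    g = 0.0
    for _ in range(n_iter):
        Th = np.empty(n_s)
        for s in range(n_s):
            # Payoff matrix: rows = controller actions, cols = adversary kernels.
            cont = np.einsum('kaj,j->ak', kernels[s], h)   # continuation values
            M = r[s][:, None] + cont
            Th[s], _ = matrix_game_value(M)
        g = Th[0]                        # gain estimate at reference state 0
        h_next = Th - g                  # normalize the bias
        converged = np.max(np.abs(h_next - h)) < tol
        h = h_next
        if converged:
            break
    return g, h


# Tiny synthetic instance: 2 states, 2 actions, 2 candidate kernels per state.
rng = np.random.default_rng(0)
r = rng.uniform(size=(2, 2))
kernels = []
for s in range(2):
    K = rng.uniform(size=(2, 2, 2))
    K /= K.sum(axis=-1, keepdims=True)   # rows become probability distributions
    kernels.append(K)

g, h = robust_rvi(r, kernels)
print(f"estimated robust gain: {g:.6f}, bias: {h}")
```

Restricting to finitely many candidate kernels keeps the inner adversary problem a matrix game solvable by LP; richer S-rectangular sets (for example, KL or Wasserstein balls) would replace that step with a convex program.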
Shengbo Wang
Daniel J. Epstein Department of Industrial and Systems Engineering, University of Southern California
Nian Si
Hong Kong University of Science and Technology
Applied Probability · Experimental Design · Causal Inference