Efficient Solution and Learning of Robust Factored MDPs

📅 2025-08-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Question: For robust Markov decision processes (r-MDPs) in unknown environments, how can we exploit factored state-space structure—reflecting independence among system components—to improve sample efficiency and obtain strong theoretical performance guarantees for robust policy learning?
Method: The non-convex factored r-MDP optimisation is reformulated as a tractable linear program, and a PAC-style algorithm jointly learns both the factored model structure and a robust policy from interaction data. The approach combines uncertainty decoupling with robust dynamic programming to ensure worst-case performance under adversarial transition uncertainty.
Contributions/Results: Experiments demonstrate substantial (dimensional) gains in sample efficiency compared to baselines. The learned policies achieve superior empirical performance, and the theoretical robustness guarantees are tighter than those of current state-of-the-art methods.
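To make the "robust dynamic programming" step concrete, here is a minimal sketch of robust value iteration on a tiny r-MDP with interval uncertainty sets over transition probabilities. All states, actions, rewards, and interval bounds are invented for illustration; this is not the paper's algorithm, just the standard worst-case Bellman backup it builds on.

```python
import numpy as np

n_states, n_actions = 3, 2
gamma = 0.9
rewards = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 0.0]])  # rewards[s, a]

# Interval bounds: P_lo[s, a, s'] <= P(s' | s, a) <= P_hi[s, a, s']
P_lo = np.full((n_states, n_actions, n_states), 0.1)
P_hi = np.full((n_states, n_actions, n_states), 0.8)

def worst_case_expectation(lo, hi, v):
    """Minimise p @ v over the polytope {lo <= p <= hi, sum(p) = 1}.
    Greedy: start from the lower bounds, then pour the remaining mass
    into successor states with the smallest values first (adversarial)."""
    p = lo.copy()
    budget = 1.0 - p.sum()
    for j in np.argsort(v):          # cheapest successors first
        add = min(hi[j] - p[j], budget)
        p[j] += add
        budget -= add
    return p @ v

def robust_value_iteration(tol=1e-8):
    """Iterate the worst-case Bellman operator to a fixed point."""
    v = np.zeros(n_states)
    while True:
        q = np.array([[rewards[s, a]
                       + gamma * worst_case_expectation(P_lo[s, a], P_hi[s, a], v)
                       for a in range(n_actions)]
                      for s in range(n_states)])
        v_new = q.max(axis=1)
        if np.abs(v_new - v).max() < tol:
            return v_new, q.argmax(axis=1)
        v = v_new

v_star, policy = robust_value_iteration()
print(np.round(v_star, 3), policy)
```

The greedy inner step is valid only for interval (s-rectangular, per-state) uncertainty sets; general convex sets require the LP machinery discussed in the abstract.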

📝 Abstract
Robust Markov decision processes (r-MDPs) extend MDPs by explicitly modelling epistemic uncertainty about transition dynamics. Learning r-MDPs from interactions with an unknown environment enables the synthesis of robust policies with provable (PAC) guarantees on performance, but this can require a large number of sample interactions. We propose novel methods for solving and learning r-MDPs based on factored state-space representations that leverage the independence between model uncertainty across system components. Although policy synthesis for factored r-MDPs leads to hard, non-convex optimisation problems, we show how to reformulate these into tractable linear programs. Building on these, we also propose methods to learn factored model representations directly. Our experimental results show that exploiting factored structure can yield dimensional gains in sample efficiency, producing more effective robust policies with tighter performance guarantees than state-of-the-art methods.
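The abstract's point about reformulating the optimisation as a linear program can be illustrated on the smallest possible case: the adversary's inner problem in a robust Bellman backup with interval uncertainty is itself an LP, minimise p · v subject to lo ≤ p ≤ hi and Σp = 1. The sketch below (successor values and bounds invented for illustration) solves it with an off-the-shelf LP solver.

```python
import numpy as np
from scipy.optimize import linprog

v = np.array([1.0, 2.0, 3.0])    # value estimates of the three successor states
lo = np.array([0.1, 0.1, 0.1])   # lower transition-probability bounds
hi = np.array([0.8, 0.8, 0.8])   # upper transition-probability bounds

# minimise v @ p  s.t.  sum(p) = 1,  lo <= p <= hi
res = linprog(c=v,
              A_eq=np.ones((1, 3)), b_eq=[1.0],
              bounds=list(zip(lo, hi)),
              method="highs")

print(res.x)    # adversarial distribution: [0.8, 0.1, 0.1]
print(res.fun)  # worst-case expected value: 1.3
```

The adversary puts all spare probability mass (0.7) on the cheapest successor; embedding such inner LPs into the outer policy optimisation is what the paper's tractable reformulation makes possible for the factored case.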
Problem

Research questions and friction points this paper is trying to address.

Learning robust MDPs with factored state-space representations
Reformulating non-convex problems into tractable linear programs
Improving sample efficiency for robust policy synthesis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Factored state-space representations for r-MDPs
Reformulate non-convex problems into linear programs
Learn factored models directly for sample efficiency
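A back-of-the-envelope calculation shows why factored models help sample efficiency (numbers invented for illustration): a state made of k independent binary components has 2**k flat states, but a factored model only estimates each component's small local transition function.

```python
def flat_param_count(k: int, n_actions: int) -> int:
    """Free transition parameters for a flat model over 2**k states:
    one probability row per (state, action), each with n - 1 free entries."""
    n = 2 ** k
    return n_actions * n * (n - 1)

def factored_param_count(k: int, n_actions: int, scope: int = 1) -> int:
    """Factored model where each binary component depends on `scope`
    parent components: 2**scope conditional rows per (component, action),
    each a Bernoulli with one free parameter."""
    return k * n_actions * (2 ** scope)

print(flat_param_count(10, 2))      # 2 * 1024 * 1023 = 2095104
print(factored_param_count(10, 2))  # 10 * 2 * 2 = 40
```

Fewer free parameters to estimate translates directly into fewer samples needed for a given PAC confidence, which is the intuition behind the "dimensional gains" claimed in the abstract.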