GammaZero: Learning To Guide POMDP Belief Space Search With Graph Representations

📅 2025-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing POMDP planning methods rely on domain-specific neural architectures, scale poorly, and cannot generalize across problem sizes. Method: the paper proposes an action-centric graph representation framework that, for the first time, unifies belief states as structured graphs; it pairs a graph neural network with a decoder to learn heuristic policies and value functions from expert demonstrations, which then guide Monte Carlo tree search. Contribution/Results: the learned policies and value functions transfer zero-shot to problem instances 2-4x larger than those seen during training. On standard POMDP benchmarks, the approach matches BetaZero's performance while significantly reducing search overhead. Crucially, it breaks the conventional dependence on training-scale alignment, achieving genuine cross-scale zero-shot generalization without retraining or fine-tuning, thereby advancing scalability and adaptability in POMDP planning.

📝 Abstract
We introduce an action-centric graph representation framework for learning to guide planning in Partially Observable Markov Decision Processes (POMDPs). Unlike existing approaches that require domain-specific neural architectures and struggle with scalability, GammaZero leverages a unified graph-based belief representation that enables generalization across problem sizes within a domain. Our key insight is that belief states can be systematically transformed into action-centric graphs where structural patterns learned on small problems transfer to larger instances. We employ a graph neural network with a decoder architecture to learn value functions and policies from expert demonstrations on computationally tractable problems, then apply these learned heuristics to guide Monte Carlo tree search on larger problems. Experimental results on standard POMDP benchmarks demonstrate that GammaZero achieves comparable performance to BetaZero when trained and tested on the same-sized problems, while uniquely enabling zero-shot generalization to problems 2-4 times larger than those seen during training, maintaining solution quality with reduced search requirements.
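The paper does not include code, but the pipeline described in the abstract (particle belief → action-centric graph → GNN message passing) can be sketched. The following is a minimal illustrative sketch, not the paper's implementation: the featurization, the rule linking actions that affect shared state variables, and all function names are assumptions.

```python
import numpy as np

# Hypothetical sketch of an action-centric graph built from a
# particle-filter belief. The featurization and edge rule below are
# illustrative assumptions, not the paper's actual construction.

def belief_to_action_graph(particles, actions, affected_vars):
    """One node per action; edges link actions whose effects overlap.

    particles:     (n_particles, state_dim) samples approximating the belief
    actions:       list of action ids
    affected_vars: dict action -> set of state-variable indices it touches
    """
    mean, std = particles.mean(axis=0), particles.std(axis=0)
    feats = []
    for a in actions:
        # Node features: belief statistics restricted to the variables
        # this action interacts with (zeroed elsewhere).
        mask = np.zeros(particles.shape[1])
        mask[list(affected_vars[a])] = 1.0
        feats.append(np.concatenate([mean * mask, std * mask]))
    x = np.stack(feats)

    # Adjacency: actions are neighbours if they affect a shared variable.
    n = len(actions)
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and affected_vars[actions[i]] & affected_vars[actions[j]]:
                adj[i, j] = 1.0
    return x, adj

def message_pass(x, adj, w_self, w_nbr):
    """A single mean-aggregation GNN layer with ReLU activation."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    nbr = (adj @ x) / deg
    return np.maximum(x @ w_self + nbr @ w_nbr, 0.0)
```

Because the node and edge structure depends only on the action set and which state variables each action touches, the same learned layer weights can in principle be applied to a larger instance of the domain, which is the kind of cross-scale transfer the paper targets.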
Problem

Research questions and friction points this paper is trying to address.

Learning scalable POMDP planning with graph representations
Enabling zero-shot generalization to larger problem instances
Guiding belief space search using learned graph neural networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Action-centric graph representation for POMDP planning
Graph neural network learns policies from expert demonstrations
Zero-shot generalization to larger problems with reduced search
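To make "guiding search with a learned policy" concrete: AlphaZero-style planners such as BetaZero bias MCTS action selection with the network's policy prior via the standard PUCT rule, and GammaZero's GNN presumably plays the same role. A minimal sketch of that rule (names and node layout are illustrative assumptions):

```python
import math

# Illustrative sketch of PUCT selection, the standard way a learned
# policy prior guides MCTS in AlphaZero-style planners. Not the
# paper's code; the node representation here is an assumption.

def puct_select(children, c_puct=1.5):
    """Pick the action maximizing Q + c * prior * sqrt(N_parent) / (1 + N).

    children: list of dicts with keys 'action', 'q' (mean value),
              'n' (visit count), 'prior' (network probability).
    """
    total = sum(ch["n"] for ch in children)
    sqrt_total = math.sqrt(max(total, 1))

    def score(ch):
        # High-prior, rarely-visited actions get an exploration bonus;
        # as visits accumulate, the empirical value Q dominates.
        return ch["q"] + c_puct * ch["prior"] * sqrt_total / (1 + ch["n"])

    return max(children, key=score)["action"]
```

A sharp prior concentrates visits on a few promising actions early, which is one way a learned heuristic can reduce the search budget needed on larger problem instances.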