🤖 AI Summary
To address the exploration–exploitation trade-off in continuous partially observable Markov decision processes (POMDPs), this paper proposes an information-value-driven policy optimization framework. Methodologically, it casts policy learning as probabilistic inference in a non-Markovian Feynman–Kac model, which naturally induces information-gathering behavior without hand-crafted exploration rewards or heuristic bonus terms. The approach combines nested sequential Monte Carlo (SMC), history-dependent policy gradient estimation, and sampling from the optimal trajectory distribution to jointly optimize belief dynamics and long-horizon information gain. Empirically, the method yields significant improvements over state-of-the-art baselines on standard continuous POMDP benchmarks, showing superior robustness and decision efficiency, particularly under high uncertainty.
📝 Abstract
Optimal decision-making under partial observability requires agents to balance reducing uncertainty (exploration) against pursuing immediate objectives (exploitation). In this paper, we introduce a novel policy optimization framework for continuous partially observable Markov decision processes (POMDPs) that explicitly addresses this challenge. Our method casts policy learning as probabilistic inference in a non-Markovian Feynman–Kac model that inherently captures the value of information gathering by anticipating future observations, without requiring extrinsic exploration bonuses or handcrafted heuristics. To optimize policies under this model, we develop a nested sequential Monte Carlo (SMC) algorithm that efficiently estimates a history-dependent policy gradient using samples from the optimal trajectory distribution induced by the POMDP. We demonstrate the effectiveness of our algorithm across standard continuous POMDP benchmarks, where existing methods struggle to act under uncertainty.
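The abstract's core recipe, reweighting trajectories by an exponentiated reward (a Feynman–Kac potential) while accumulating the score of a history-dependent policy, can be sketched on a toy problem. Everything below is an illustrative assumption rather than the paper's actual construction: the linear-Gaussian dynamics, the running-mean observation feature standing in for the history, and a single layer of SMC (the paper's nested scheme additionally runs an inner filter per particle).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D POMDP (illustrative stand-in): latent state x_t, action a_t,
# noisy observation y_t = x_t + noise, log-potential (reward) -x_t^2.
def step(x, a):
    return 0.9 * x + a + 0.1 * rng.standard_normal(x.shape)

def observe(x):
    return x + 0.5 * rng.standard_normal(x.shape)

def smc_policy_gradient(theta, n_particles=500, horizon=10, sigma=0.3):
    """One SMC sweep: propagate particles under a Gaussian policy whose
    mean is linear in a history feature, weight by exp(reward) as a
    Feynman-Kac potential, resample, and carry each particle's
    accumulated policy score. The mean score over surviving particles
    approximates the gradient of log Z with respect to theta."""
    x = rng.standard_normal(n_particles)   # particles over the latent state
    feat = np.zeros(n_particles)           # running mean of observations
    score = np.zeros(n_particles)          # per-particle d/dtheta log pi
    for t in range(horizon):
        y = observe(x)
        feat = (feat * t + y) / (t + 1)    # crude history summary
        mu = theta * feat                  # history-dependent policy mean
        a = mu + sigma * rng.standard_normal(n_particles)
        score += (a - mu) / sigma**2 * feat  # Gaussian score w.r.t. theta
        x = step(x, a)
        logw = -x**2                       # reward enters as log-potential
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)  # resample
        x, feat, score = x[idx], feat[idx], score[idx]
    return float(score.mean())             # gradient estimate

g = smc_policy_gradient(theta=0.1)
```

The resampling step is what makes the estimate "under the optimal trajectory distribution" in spirit: particles with high cumulative potential survive, so the averaged score is taken over reward-tilted histories rather than the policy's own rollouts.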