Observation Adaptation via Annealed Importance Resampling for Partially Observable Markov Decision Processes

📅 2025-03-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address particle degeneracy and sample impoverishment in online POMDP solving induced by high-information observations, this paper proposes an annealed importance resampling (AIR)-based belief update method. The core innovation lies in learning a sequence of bridge distributions that progressively approximate the optimal posterior via iterative Monte Carlo steps, thereby mitigating the failure of conventional particle filters under model-belief mismatch. By seamlessly integrating annealed importance sampling with online Bayesian belief updating, the method achieves significant improvements over existing state-of-the-art approaches across multiple standard POMDP benchmarks: belief estimation error is reduced by 23%–41%, and expected policy return increases by 18%–35%.

Technology Category

Application Category

📝 Abstract
Partially observable Markov decision processes (POMDPs) are a general mathematical model for sequential decision-making in stochastic environments under state uncertainty. POMDPs are often solved extit{online}, which enables the algorithm to adapt to new information in real time. Online solvers typically use bootstrap particle filters based on importance resampling for updating the belief distribution. Since directly sampling from the ideal state distribution given the latest observation and previous state is infeasible, particle filters approximate the posterior belief distribution by propagating states and adjusting weights through prediction and resampling steps. However, in practice, the importance resampling technique often leads to particle degeneracy and sample impoverishment when the state transition model poorly aligns with the posterior belief distribution, especially when the received observation is highly informative. We propose an approach that constructs a sequence of bridge distributions between the state-transition and optimal distributions through iterative Monte Carlo steps, better accommodating noisy observations in online POMDP solvers. Our algorithm demonstrates significantly superior performance compared to state-of-the-art methods when evaluated across multiple challenging POMDP domains.
Problem

Research questions and friction points this paper is trying to address.

Addresses particle degeneracy in POMDP online solvers
Improves belief updates with noisy observations
Enhances state-transition and posterior distribution alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Annealed importance resampling for POMDPs
Bridge distributions via Monte Carlo
Improved online belief updates
🔎 Similar Papers
No similar papers found.