🤖 AI Summary
Online planning for continuous POMDPs with high-dimensional continuous observations (e.g., images) suffers from prohibitive computational cost due to repeated evaluation of the original, expensive observation model, hindering real-time deployment.
Method: We propose a planning framework based on a simplified observation model that integrates particle filtering, continuous POMDP solvers, and probabilistic bound decomposition. Crucially, we derive a provable performance lower bound in terms of the total variation (TV) distance between the models, enabling offline construction of the simplified model and eliminating all online queries to the original observation model.
Contribution/Results: This is the first work to establish a TV-distance-based, theoretically guaranteed performance bound for simplified observation models in continuous POMDP planning. The bound ensures planning quality without runtime access to the original model and extends concentration results for particle-belief MDPs. Empirical evaluation confirms seamless integration with existing online solvers and yields tight, computationally tractable performance bounds with zero runtime calls to the original observation model.
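To make the central quantity concrete: for discrete (or discretized) observation spaces, the total variation distance between the original and simplified observation likelihoods is half the L1 distance between them. The sketch below is illustrative only; the paper's actual statistical TV estimator and the example distributions here are not taken from the source.

```python
import numpy as np

def tv_distance(p, q):
    """Total variation distance between two discrete distributions:
    TV(P, Q) = 0.5 * sum_i |p_i - q_i|, a value in [0, 1]."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * np.abs(p - q).sum()

# Hypothetical original (expensive) vs. simplified observation
# likelihoods over a discretized observation space.
p_orig = [0.5, 0.3, 0.2]
p_simp = [0.4, 0.4, 0.2]
print(tv_distance(p_orig, p_simp))  # ≈ 0.1
```

A small TV distance between the two models is what makes the offline-computed performance bound tight at planning time.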
📝 Abstract
Solving partially observable Markov decision processes (POMDPs) with high-dimensional, continuous observations, such as camera images, is required for many real-life robotics and planning problems. Recent research has suggested machine-learned probabilistic models as observation models, but their use is currently too computationally expensive for online deployment. We deal with the question of what the implications of using simplified observation models for planning would be, while retaining formal guarantees on the quality of the solution. Our main contribution is a novel probabilistic bound based on a statistical total variation distance of the simplified model. We show that it bounds the theoretical POMDP value w.r.t. the original model from the empirical planned value with the simplified model, by generalizing recent results on particle-belief MDP concentration bounds. Our calculations can be separated into offline and online parts, and we arrive at formal guarantees without having to access the costly model at all during planning, which is also a novel result. Finally, we demonstrate in simulation how to integrate the bound into the routine of an existing continuous online POMDP solver.
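The abstract's key operational idea is that the particle-belief update inside the planner only ever calls the simplified observation model. A minimal bootstrap-filter measurement update in that spirit is sketched below; `simplified_likelihood` is a hypothetical cheap surrogate (a 1-D Gaussian kernel), standing in for the paper's learned simplified model, which the source does not specify.

```python
import numpy as np

rng = np.random.default_rng(0)

def simplified_likelihood(obs, particles):
    # Hypothetical cheap surrogate for the expensive observation model:
    # an unnormalized Gaussian likelihood on a scalar observation summary.
    return np.exp(-0.5 * (obs - particles) ** 2)

def particle_filter_update(particles, weights, obs):
    """One measurement update of a bootstrap particle filter, using
    only the simplified model (zero calls to the original model)."""
    w = weights * simplified_likelihood(obs, particles)
    w /= w.sum()
    # Resample to keep the particle set well-conditioned.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.normal(0.0, 1.0, size=100)
weights = np.full(100, 1.0 / 100)
particles, weights = particle_filter_update(particles, weights, obs=0.5)
```

The TV-distance bound is what licenses this substitution: it quantifies, offline, how much planned value can be lost by filtering and planning with the surrogate instead of the original model.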