🤖 AI Summary
This work addresses the exploration–exploitation trade-off in active multi-target tracking for mobile robots, where existing diffusion-based policies lack explicit uncertainty quantification over which strategy to execute. The authors formulate expert selection as an offline contextual bandit problem and propose a multi-head model based on a Variational Bayesian Last Layer (VBLL) that jointly predicts the performance and associated uncertainty of each expert policy. By combining these uncertainty estimates with a Lower Confidence Bound (LCB) criterion, the approach enables pessimistic, robust expert selection that avoids over-reliance on unreliable predictions. Evaluated on simulated indoor tracking tasks, the method significantly outperforms the base diffusion policy, Mixture-of-Experts gating, and deterministic regression baselines.
📝 Abstract
Active multi-target tracking requires a mobile robot to balance exploration for undetected targets with exploitation of uncertain tracked ones. Diffusion policies have emerged as a powerful approach for capturing diverse behavioral strategies by learning action sequences from expert demonstrations. However, existing methods implicitly select among strategies through the denoising process, without uncertainty quantification over which strategy to execute. We formulate expert selection for diffusion policies as an offline contextual bandit problem and propose a Bayesian framework for pessimistic, uncertainty-aware strategy selection. A multi-head Variational Bayesian Last Layer (VBLL) model predicts the expected tracking performance of each expert strategy given the current belief state, providing both a point estimate and predictive uncertainty. Following the pessimism principle for offline decision-making, a Lower Confidence Bound (LCB) criterion then selects the expert whose worst-case predicted performance is best, avoiding overcommitment to experts with unreliable predictions. The selected expert conditions a diffusion policy to generate corresponding action sequences. Experiments on simulated indoor tracking scenarios demonstrate that our approach outperforms both the base diffusion policy and standard gating methods, including Mixture-of-Experts selection and deterministic regression baselines.
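The selection step described above reduces to a simple rule: score each expert by its predicted performance minus a multiple of its predictive standard deviation, then pick the expert with the best worst-case score. The sketch below illustrates this LCB rule in plain NumPy; the function name, the `beta` pessimism weight, and the example numbers are illustrative assumptions, not the paper's implementation (which obtains the per-expert mean and variance from the multi-head VBLL model).

```python
import numpy as np

def select_expert_lcb(means, variances, beta=1.0):
    """Pessimistic expert selection via a Lower Confidence Bound (LCB).

    means, variances: per-expert predicted tracking performance and
    predictive variance (here hard-coded; in the paper these would come
    from a multi-head VBLL model conditioned on the belief state).
    beta: pessimism weight; larger values penalize uncertainty more.
    Returns the index of the expert whose worst-case (lower-bound)
    predicted performance is highest.
    """
    lcb = np.asarray(means) - beta * np.sqrt(np.asarray(variances))
    return int(np.argmax(lcb))

# Hypothetical predictions for three expert strategies:
means = [0.80, 0.90, 0.60]      # predicted tracking performance
variances = [0.01, 0.25, 0.04]  # predictive uncertainty per head
print(select_expert_lcb(means, variances, beta=2.0))  # → 0
```

With `beta=2.0`, expert 1's high mean (0.90) is discounted by its large variance, so the better-calibrated expert 0 is chosen; setting `beta=0` recovers greedy selection by the mean alone. The chosen index would then condition the diffusion policy that generates the action sequence.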