🤖 AI Summary
This work addresses the long-term persistent tracking of multiple dynamic targets in large-scale environments under motion uncertainty—particularly challenging when robotic agents switch targets, causing belief degradation or permanent loss of out-of-view targets. We propose a hierarchical active tracking planning framework: the upper layer formulates multi-target tracking as a Markov decision process (MDP), optimizing subtask sequences via belief evolution and success probability estimation; the lower layer integrates risk-aware coverage planning with uncertainty propagation to enable robust single-target search. Our key contribution is the first formulation of long-term tracking as a tractable sequential decision-making problem in belief space, synergistically combining tree-based search with explicit motion uncertainty modeling. Extensive simulations demonstrate that the method reduces final tracking uncertainty by 11%–70% across diverse scenarios and significantly mitigates target loss.
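As a hedged illustration of the belief-degradation idea the summary describes (a minimal sketch, not the paper's implementation): when a target leaves the field of view, a Kalman-style prediction step with no measurement update grows its covariance at every time step. The constant-velocity model and the noise parameter `q` below are illustrative assumptions.

```python
import numpy as np

def predict_belief(mean, cov, dt=1.0, q=0.5):
    """One Kalman prediction step for an unobserved 2D target.

    State: [x, y, vx, vy] under an assumed constant-velocity model.
    With no measurement update, the covariance (belief uncertainty)
    grows at every step.
    """
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    Q = q * np.eye(4)  # process noise (isotropic, purely illustrative)
    return F @ mean, F @ cov @ F.T + Q

mean, cov = np.zeros(4), np.eye(4)
for _ in range(10):          # 10 steps with the target out of view
    mean, cov = predict_belief(mean, cov)
print(np.trace(cov))         # total uncertainty has grown monotonically
```

Monotone covariance growth like this is what makes long-unseen targets progressively harder to reacquire, motivating a planner that reasons about when each target was last observed.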
📝 Abstract
Achieving persistent tracking of multiple dynamic targets over a large spatial area poses significant challenges for a single-robot system with constrained sensing capabilities. As the robot moves to track different targets, the ones outside the field of view accumulate uncertainty, making them progressively harder to track. An effective path planning algorithm must manage uncertainty over a long horizon and account for the risk of permanently losing track of targets that remain unseen for too long. However, most existing approaches rely on short planning horizons and assume small, bounded environments, resulting in poor tracking performance and target loss in large-scale scenarios. In this paper, we present a hierarchical planner for tracking multiple moving targets with an aerial vehicle. To address the challenge of tracking non-static targets, our method incorporates motion models and uncertainty propagation during path execution, allowing for more informed decision-making. We decompose the multi-target tracking task into sub-tasks of single-target search and detection. Our pipeline consists of a novel low-level coverage planner that searches for a target within an evolving belief area, and an estimation method that assesses the likelihood of success for each sub-task. This formulation converts active target tracking into a Markov decision process (MDP), which we solve with a tree-based algorithm to determine the sequence of sub-tasks. We validate our approach in simulation, where it outperforms existing planners for active target tracking, reducing final uncertainty by 11-70% across different environments.
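To make the sub-task sequencing concrete, here is a toy sketch of the decision problem the abstract describes: choose the order of single-target search sub-tasks so that total accumulated uncertainty is minimized. This is a simplified stand-in for the paper's tree-based MDP solver; the exhaustive permutation search, the linear uncertainty-growth model, and all numbers are hypothetical.

```python
from itertools import permutations

def plan_sequence(subtasks):
    """Exhaustively evaluate sub-task orderings (a toy stand-in for a
    tree-based MDP solver) to minimize summed uncertainty at the moment
    each target's search completes.

    Each subtask is (name, duration, growth_rate): while a target waits
    unobserved, its uncertainty is assumed to grow linearly at
    growth_rate. All values are illustrative.
    """
    def cost(order):
        t, total = 0.0, 0.0
        for name, duration, growth in order:
            t += duration        # time when this target's search finishes
            total += growth * t  # uncertainty it accumulated until then
        return total

    return min(permutations(subtasks), key=cost)

tasks = [("A", 4.0, 1.0), ("B", 1.0, 3.0), ("C", 2.0, 2.0)]
best = plan_sequence(tasks)
print([name for name, _, _ in best])  # → ['B', 'C', 'A']
```

The toy solution prioritizes short searches for fast-degrading targets, matching the intuition behind the paper's belief-space sequencing; the real planner additionally weighs each sub-task's estimated success probability rather than assuming every search succeeds.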