🤖 AI Summary
This work addresses the limitation of existing kernel-based ergodic control methods, which suffer from high computational complexity and are thus restricted to finite time horizons, rendering them unsuitable for long-term coverage tasks. For the first time, the authors extend ergodic control to the infinite time horizon by introducing a novel controller based on kernel mean embeddings. The approach decouples historical state visitation from future control actions through a recursively updated extended state-visit error variable and leverages a receding-horizon scheme for efficient optimization. The proposed method is theoretically guaranteed to achieve asymptotic convergence and demonstrates superior long-term ergodic performance in both two-dimensional and three-dimensional coverage tasks.
📝 Abstract
This paper derives an infinite-horizon ergodic controller based on kernel mean embeddings for long-duration coverage tasks on general domains. While existing kernel-based ergodic control methods provide strong coverage guarantees on general domains, their practical use has been limited to sub-ergodic, finite-time horizons due to intractable computational scaling, prohibiting their use for long-duration coverage. We resolve this scaling by deriving an infinite-horizon ergodic controller equipped with an extended kernel mean embedding error state that recursively records state visitation. This extended state decouples past visitation from future control synthesis and extends ergodic control to infinite-time settings. In addition, we present a variation of the controller that operates on a receding-horizon control formulation with the extended error state. We provide a theoretical proof of asymptotic convergence for the derived controller and show preservation of ergodic coverage guarantees for a class of 2D and 3D coverage problems.
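The key idea of a recursively updated visitation-error state can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; it is a hypothetical illustration assuming an RBF kernel evaluated on a fixed set of reference points, where the empirical kernel mean embedding of visited states is updated as a running mean and compared against a target embedding. All names (`rbf_features`, `ErgodicErrorState`) and the specific update rule are illustrative assumptions.

```python
import numpy as np

def rbf_features(x, ref_points, h=0.1):
    # Evaluate an RBF kernel k(x, g) at a fixed set of reference points g
    # (an illustrative finite-dimensional proxy for the kernel mean embedding).
    d2 = np.sum((ref_points - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * h ** 2))

class ErgodicErrorState:
    """Sketch of an extended error state: recursively tracks the gap between
    the empirical embedding of visited states and a target embedding."""

    def __init__(self, ref_points, target_embedding, h=0.1):
        self.ref_points = ref_points
        self.h = h
        self.target = target_embedding
        self.visit_embedding = np.zeros(len(ref_points))
        self.t = 0

    def update(self, x):
        # Running-mean recursion: mu_t = mu_{t-1} + (k(x_t, .) - mu_{t-1}) / t,
        # so all past visitation is summarized in a fixed-size state and
        # never needs to be replayed when synthesizing future controls.
        self.t += 1
        phi = rbf_features(x, self.ref_points, self.h)
        self.visit_embedding += (phi - self.visit_embedding) / self.t
        return self.visit_embedding - self.target  # the extended error state
```

In this sketch, a receding-horizon controller would repeatedly query `update` and choose actions that drive the returned error toward zero; the constant per-step cost of the recursion is what makes the infinite-horizon setting tractable.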