🤖 AI Summary
In interactive imitation learning, frequent or abrupt expert-agent switching induces action discontinuities, degrading dynamic stability. To address this, we propose CubeDAgger: (1) a threshold-regularized supervised triggering mechanism to suppress spurious or excessive switching; (2) an optimal consensus fusion strategy over multiple candidate actions to enhance decision robustness; and (3) temporally consistent autoregressive colored noise for exploration, ensuring motion continuity. Built upon the EnsembleDAgger framework, CubeDAgger holistically integrates action-space regularization, consensus-based decision-making, and autoregressive stochastic modeling. Experimental results in simulation demonstrate that CubeDAgger significantly reduces dynamic stability violation rates compared to baseline methods, while markedly improving policy robustness and interaction smoothness.
📝 Abstract
Interactive imitation learning makes an agent's control policy robust through stepwise supervision from an expert. Recent algorithms mostly employ expert-agent switching to reduce the expert's burden by selecting the supervision timing only when needed. However, precise selection is difficult, and such switching causes abrupt changes in actions, damaging dynamic stability. This paper therefore proposes a novel method, called CubeDAgger, which improves robustness while reducing dynamic stability violations by making three improvements to a baseline method, EnsembleDAgger. The first improvement adds a regularization to explicitly activate the threshold for deciding the supervision timing. The second transforms the expert-agent switching system into an optimal consensus system over multiple action candidates. Third, autoregressive colored noise is added to the actions to make stochastic exploration consistent over time. These improvements are verified in simulation, showing that the learned policies are sufficiently robust while maintaining dynamic stability during interaction.
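The abstract does not specify how the autoregressive colored noise is generated; a common way to obtain temporally consistent exploration noise is a first-order autoregressive (AR(1)) process, sketched below. The function name `ar1_noise` and the hyperparameters `rho` (temporal correlation) and `sigma` (stationary standard deviation) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ar1_noise(n_steps, dim, rho=0.9, sigma=0.1, seed=0):
    """Temporally correlated AR(1) exploration noise (a sketch, not the paper's exact scheme).

    eps[t] = rho * eps[t-1] + sqrt(1 - rho**2) * w[t],  w[t] ~ N(0, sigma**2)

    rho in [0, 1) controls temporal consistency; rho = 0 recovers white noise.
    The sqrt(1 - rho**2) factor keeps the stationary variance at sigma**2,
    so the overall exploration magnitude is independent of rho.
    """
    rng = np.random.default_rng(seed)
    eps = np.zeros((n_steps, dim))
    for t in range(1, n_steps):
        eps[t] = rho * eps[t - 1] + np.sqrt(1.0 - rho**2) * rng.normal(0.0, sigma, dim)
    return eps
```

Because consecutive samples are correlated, the perturbed actions change smoothly between time steps instead of jumping, which is the motion-continuity property the method relies on.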