🤖 AI Summary
Addressing the challenge of balancing interpretability and performance in indoor multimodal robot navigation, this paper proposes a reinforcement learning (RL) framework integrated with action-language description generation. The method introduces action description generation as an auxiliary RL task (the first such formulation) and employs knowledge distillation from pretrained vision-language models (e.g., CLIP-ViL) to generate high-quality pseudo-labels, effectively mitigating the scarcity of human-annotated language data. By jointly optimizing navigation policies and natural-language action explanations, the approach achieves state-of-the-art performance on semantic audio-visual navigation benchmarks. Crucially, it provides real-time, semantically accurate action interpretations, enhancing system transparency and trustworthiness without compromising task performance.
📝 Abstract
The field of multimodal robot navigation in indoor environments has garnered significant attention in recent years. However, as tasks and methods become more advanced, the action decision systems tend to become more complex and operate as black boxes. For a reliable system, the ability to explain or describe its decisions is crucial; however, there tends to be a trade-off in that explainable systems cannot match non-explainable systems in terms of performance. In this paper, we propose incorporating the task of describing actions in language into the reinforcement learning of navigation as an auxiliary task. Existing studies have found it difficult to incorporate describing actions into reinforcement learning due to the absence of ground-truth data. We address this issue by leveraging knowledge distillation from pre-trained description generation models, such as vision-language models. We comprehensively evaluate our approach across various navigation tasks, demonstrating that it can describe actions while attaining high navigation performance. Furthermore, it achieves state-of-the-art performance in the particularly challenging multimodal navigation task of semantic audio-visual navigation.
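To make the auxiliary-task idea concrete, a minimal sketch of the kind of joint objective described above is shown below. The paper itself does not specify this exact formulation; all names (`joint_loss`, `lam`, the toy distributions) and the weighting scheme are illustrative assumptions: a navigation loss is combined with a distillation term that matches the agent's description output to soft pseudo-labels from a pretrained teacher such as a CLIP-ViL-style captioner.

```python
# Hypothetical sketch of an RL objective with an auxiliary description-
# distillation term. Function and variable names are illustrative, not
# taken from the paper.
import math

def cross_entropy(student_probs, teacher_probs):
    """Cross-entropy between the student's predicted token distribution
    and the teacher's soft pseudo-label distribution."""
    return -sum(t * math.log(s) for s, t in zip(student_probs, teacher_probs) if t > 0)

def joint_loss(nav_loss, student_probs, teacher_probs, lam=0.1):
    """Total objective: navigation (RL) loss plus a weighted
    knowledge-distillation loss on the generated action description."""
    return nav_loss + lam * cross_entropy(student_probs, teacher_probs)

# Toy usage: the teacher supplies soft pseudo-labels over 3 description tokens.
teacher = [0.7, 0.2, 0.1]   # teacher pseudo-label distribution
student = [0.6, 0.3, 0.1]   # student's predicted distribution
total = joint_loss(nav_loss=1.5, student_probs=student, teacher_probs=teacher)
```

In this sketch the distillation weight `lam` trades off explanation quality against navigation performance; the paper's claim is that such joint training need not hurt, and can even improve, the navigation policy.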