Embodied Navigation with Auxiliary Task of Action Description Prediction

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenge of balancing interpretability and performance in indoor multimodal robot navigation, this paper proposes a reinforcement learning (RL) framework integrated with action-language description generation. The method introduces action description generation as an auxiliary RL task (the first such formulation) and employs knowledge distillation from pretrained vision-language models (e.g., CLIP-ViL) to generate high-quality pseudo-labels, mitigating the scarcity of human-annotated language data. By jointly optimizing navigation policies and natural-language action explanations, the approach achieves state-of-the-art performance on semantic audio-visual navigation benchmarks. Crucially, it provides real-time, semantically accurate action interpretations, enhancing system transparency and trustworthiness without compromising task performance.

📝 Abstract
The field of multimodal robot navigation in indoor environments has garnered significant attention in recent years. However, as tasks and methods become more advanced, action decision systems tend to become more complex and operate as black boxes. For a reliable system, the ability to explain or describe its decisions is crucial; however, there tends to be a trade-off in that explainable systems cannot outperform non-explainable systems. In this paper, we propose incorporating the task of describing actions in language into the reinforcement learning of navigation as an auxiliary task. Existing studies have found it difficult to incorporate describing actions into reinforcement learning due to the absence of ground-truth data. We address this issue by leveraging knowledge distillation from pre-trained description generation models, such as vision-language models. We comprehensively evaluate our approach across various navigation tasks, demonstrating that it can describe actions while attaining high navigation performance. Furthermore, it achieves state-of-the-art performance in the particularly challenging multimodal navigation task of semantic audio-visual navigation.
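The core idea of the abstract can be sketched as a joint objective: the navigation RL loss plus a weighted cross-entropy term that fits the agent's description head to pseudo-labels distilled from a pretrained vision-language teacher. The following toy sketch is an illustrative assumption about how such a joint loss might look; the function names, the averaging scheme, and the weight value are hypothetical, not the paper's actual implementation.

```python
import math

def cross_entropy(pred_probs, target_idx):
    # Negative log-likelihood of the teacher's pseudo-label token.
    return -math.log(pred_probs[target_idx])

def joint_loss(policy_loss, desc_probs, pseudo_labels, weight=0.1):
    """Combine the RL objective with the auxiliary description loss.

    policy_loss   -- scalar loss from the navigation RL objective
    desc_probs    -- per-token output distributions of the description head
    pseudo_labels -- token indices distilled from a pretrained VLM teacher
    weight        -- trade-off coefficient for the auxiliary task (assumed)
    """
    aux = sum(cross_entropy(p, t) for p, t in zip(desc_probs, pseudo_labels))
    aux /= max(len(pseudo_labels), 1)  # mean over description tokens
    return policy_loss + weight * aux

# Toy example: two description tokens, teacher pseudo-labels 0 and 2.
loss = joint_loss(
    policy_loss=1.5,
    desc_probs=[[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]],
    pseudo_labels=[0, 2],
)
```

Because the auxiliary term is just an added loss on a shared representation, the navigation policy can in principle be trained unchanged, which is consistent with the paper's claim of explainability without a performance penalty.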
Problem

Research questions and friction points this paper is trying to address.

Improving explainability in embodied robot navigation systems
Overcoming performance trade-offs in explainable navigation decision systems
Addressing lack of ground-truth data for action description in navigation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Action description prediction as auxiliary task
Knowledge distillation from pre-trained vision-language models
Maintains navigation performance while providing explanations