🤖 AI Summary
This work addresses the lack of systematic evaluation of robustness and trustworthiness in embodied navigation systems under real-world input perturbations, such as distortions in RGB and depth images and variations in language instructions. To bridge this gap, the authors propose NavTrust, a unified benchmark that establishes the first comprehensive evaluation framework covering RGB-depth perturbations and linguistic instruction variations. Using Uni-NaVid and ETPNav as base models, the study systematically assesses seven state-of-the-art methods across both simulated and real robotic platforms. The results reveal significant performance degradation under perturbations. In response, the authors introduce four mitigation strategies that effectively enhance robustness, with improvements validated on physical robots, thereby establishing a new paradigm for evaluating trustworthy embodied navigation.
📝 Abstract
There are two major categories of embodied navigation: Vision-Language Navigation (VLN), where agents navigate by following natural language instructions, and Object-Goal Navigation (OGN), where agents navigate to a specified target object. However, existing work primarily evaluates model performance under nominal conditions, overlooking the corruptions that arise in real-world settings. To address this gap, we present NavTrust, a unified benchmark that systematically corrupts input modalities, including RGB, depth, and instructions, in realistic scenarios and evaluates their impact on navigation performance. To the best of our knowledge, NavTrust is the first benchmark that exposes embodied navigation agents to diverse RGB-depth corruptions and instruction variations in a unified framework. Our extensive evaluation of seven state-of-the-art approaches reveals substantial performance degradation under realistic corruptions, which highlights critical robustness gaps and provides a roadmap toward more trustworthy embodied navigation systems. Furthermore, we systematically evaluate four distinct mitigation strategies for enhancing robustness against RGB-depth and instruction corruptions. Using Uni-NaVid and ETPNav as base models, we deploy them on a real mobile robot and observe improved robustness to corruptions. The project website is: https://navtrust.github.io.
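To give a concrete sense of the kind of input corruption the benchmark describes, below is a minimal sketch of two common sensor perturbations: Gaussian noise on RGB frames and random pixel dropout on depth maps. The abstract does not specify NavTrust's actual corruption implementations, so the function names, noise model, and severity parameters here are illustrative assumptions, not the benchmark's code.

```python
import numpy as np

def corrupt_rgb(image: np.ndarray, severity: float = 0.1, seed: int = 0) -> np.ndarray:
    """Add zero-mean Gaussian noise to an RGB image with values in [0, 1].

    `severity` (illustrative) is the noise standard deviation.
    """
    rng = np.random.default_rng(seed)
    noisy = image + rng.normal(0.0, severity, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

def corrupt_depth(depth: np.ndarray, drop_prob: float = 0.2, seed: int = 0) -> np.ndarray:
    """Zero out random depth pixels to mimic sensor dropout/invalid returns."""
    rng = np.random.default_rng(seed)
    keep_mask = rng.random(depth.shape) >= drop_prob
    return depth * keep_mask

# Toy 4x4 observation: a flat gray RGB frame and a unit depth map.
rgb = np.full((4, 4, 3), 0.5)
depth = np.ones((4, 4))

noisy_rgb = corrupt_rgb(rgb, severity=0.1)
sparse_depth = corrupt_depth(depth, drop_prob=0.2)
```

In a benchmark like this, such corruptions would be applied to the agent's observations at every step before they reach the navigation policy, with severity levels swept to chart the degradation curve.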