NavTrust: Benchmarking Trustworthiness for Embodied Navigation

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of systematic evaluation of robustness and trustworthiness in embodied navigation systems under real-world input perturbations, such as distortions in RGB and depth images and variations in language instructions. To bridge this gap, the authors propose NavTrust, a unified benchmark that establishes the first comprehensive evaluation framework encompassing visual-depth perturbations and linguistic instruction variations. Leveraging advanced models including Uni-NaVid and ETPNav, the study systematically assesses seven state-of-the-art methods across both simulated and real robotic platforms. The results reveal significant performance degradation under perturbations. In response, the authors introduce four mitigation strategies that effectively enhance robustness, with improvements validated on physical robots, thereby establishing a new paradigm for trustworthy embodied navigation evaluation.

📝 Abstract
There are two major categories of embodied navigation: Vision-Language Navigation (VLN), where agents navigate by following natural language instructions, and Object-Goal Navigation (OGN), where agents navigate to a specified target object. However, existing work primarily evaluates model performance under nominal conditions, overlooking the corruptions that arise in real-world settings. To address this gap, we present NavTrust, a unified benchmark that systematically corrupts input modalities, including RGB, depth, and instructions, in realistic scenarios and evaluates their impact on navigation performance. To the best of our knowledge, NavTrust is the first benchmark that exposes embodied navigation agents to diverse RGB-Depth corruptions and instruction variations in a unified framework. Our extensive evaluation of seven state-of-the-art approaches reveals substantial performance degradation under realistic corruptions, which highlights critical robustness gaps and provides a roadmap toward more trustworthy embodied navigation systems. Furthermore, we systematically evaluate four distinct mitigation strategies to enhance robustness against RGB-Depth and instruction corruptions. Using Uni-NaVid and ETPNav as base models, we deployed them on a real mobile robot and observed improved robustness to corruptions. The project website is: https://navtrust.github.io.
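The abstract describes corrupting RGB and depth inputs before they reach the agent. The paper's actual corruption suite is not detailed here, so the following is only a minimal sketch of the idea in NumPy; the function names, severity scale, noise levels, and dropout rates are all hypothetical illustrations, not NavTrust's implementation.

```python
import numpy as np

def corrupt_rgb(rgb: np.ndarray, severity: int = 3, rng=None) -> np.ndarray:
    """Add Gaussian noise to an HxWx3 uint8 RGB image (hypothetical corruption)."""
    rng = rng or np.random.default_rng(0)
    sigma = [4, 8, 16, 32, 64][severity - 1]      # noise level grows with severity
    noisy = rgb.astype(np.float32) + rng.normal(0.0, sigma, rgb.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def corrupt_depth(depth: np.ndarray, severity: int = 3, rng=None) -> np.ndarray:
    """Zero out random depth readings, mimicking sensor dropout (hypothetical)."""
    rng = rng or np.random.default_rng(0)
    drop_prob = 0.05 * severity                   # fraction of pixels invalidated
    mask = rng.random(depth.shape) < drop_prob
    out = depth.copy()
    out[mask] = 0.0
    return out

# Example: corrupt a dummy observation before handing it to a navigation policy.
rgb = np.full((224, 224, 3), 128, dtype=np.uint8)
depth = np.ones((224, 224), dtype=np.float32)
rgb_c, depth_c = corrupt_rgb(rgb), corrupt_depth(depth)
```

Severity-indexed corruptions like this allow robustness to be reported as a curve (performance vs. severity) rather than a single number, which is the usual shape of such benchmark results.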
Problem

Research questions and friction points this paper is trying to address.

embodied navigation
trustworthiness
input corruption
robustness
benchmarking
Innovation

Methods, ideas, or system contributions that make the work stand out.

embodied navigation
trustworthiness benchmark
multimodal corruption
robustness evaluation
vision-language navigation
Huaide Jiang
Trustworthy Autonomous Systems Laboratory at the University of California, Riverside, CA, USA
Yash Chaudhary
Trustworthy Autonomous Systems Laboratory at the University of California, Riverside, CA, USA
Yuping Wang
University of Michigan
Zehao Wang
Trustworthy Autonomous Systems Laboratory at the University of California, Riverside, CA, USA
Raghav Sharma
Workday, CA, USA
Manan Mehta
University of Southern California, CA, USA
Yang Zhou
Assistant Professor, Texas A&M University
Intelligent Transportation Systems, Traffic Flow Theory, Automated Vehicles, AI Applications
Lichao Sun
Lehigh University, PA, USA
Zhiwen Fan
Texas A&M University, TX, USA
Zhengzhong Tu
Texas A&M University, Google Research, University of Texas at Austin
Agentic AI, Trustworthy AI, Embodied AI
Jiachen Li
Assistant Professor @UC Riverside; Postdoc @Stanford; PhD @UC Berkeley
Safe Learning & Control, Trustworthy AI, Robotics, Autonomous Vehicles, Multi-Agent Systems