RVN-Bench: A Benchmark for Reactive Visual Navigation

📅 2026-03-04
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing visual navigation benchmarks often overlook collision handling or focus primarily on outdoor environments, limiting their applicability to safe indoor navigation for mobile robots in cluttered settings. To address this gap, this work proposes the first indoor, collision-aware reactive visual navigation benchmark, built upon Habitat 2.0 and high-fidelity HM3D scenes. The benchmark provides a standardized environment supporting both online reinforcement learning and offline training, along with a toolchain for generating image data from both successful and collision-prone trajectories. Experimental results demonstrate that policies trained within this framework exhibit strong generalization capabilities in unseen environments, confirming the benchmark's effectiveness in advancing research on safe and robust visual navigation.

πŸ“ Abstract
Safe visual navigation is critical for indoor mobile robots operating in cluttered environments. Existing benchmarks, however, often neglect collisions or are designed for outdoor scenarios, making them unsuitable for indoor visual navigation. To address this limitation, we introduce the reactive visual navigation benchmark (RVN-Bench), a collision-aware benchmark for indoor mobile robots. In RVN-Bench, an agent must reach sequential goal positions in previously unseen environments using only visual observations and no prior map, while avoiding collisions. Built on the Habitat 2.0 simulator and leveraging high-fidelity HM3D scenes, RVN-Bench provides large-scale, diverse indoor environments, defines a collision-aware navigation task and evaluation metrics, and offers tools for standardized training and benchmarking. RVN-Bench supports both online and offline learning by offering an environment for online reinforcement learning, a trajectory image dataset generator, and tools for producing negative trajectory image datasets that capture collision events. Experiments show that policies trained on RVN-Bench generalize effectively to unseen environments, demonstrating its value as a standardized benchmark for safe and robust visual navigation. Code and additional materials are available at: https://rvn-bench.github.io/.
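The task the abstract describes — reaching sequential goal positions while treating collisions as first-class events to be counted and penalized — can be illustrated with a toy episode loop. This is a minimal sketch of the task *structure* only: the class name `GridNavEnv`, the `reset`/`step` signatures, and all reward values are illustrative assumptions, not RVN-Bench's actual Habitat-based API, and a 2D grid stands in for visual observations.

```python
import random

class GridNavEnv:
    """Toy 2D grid world: reach sequential goals while avoiding obstacle cells.
    Illustrates the collision-aware, sequential-goal episode structure; it is
    NOT the real RVN-Bench interface."""

    MOVES = {0: (0, 1), 1: (0, -1), 2: (1, 0), 3: (-1, 0)}

    def __init__(self, size=8, obstacles=((3, 3), (4, 4)), goals=((7, 7), (0, 7))):
        self.size, self.obstacles, self.goals = size, set(obstacles), list(goals)

    def reset(self):
        self.pos, self.goal_idx, self.collisions = (0, 0), 0, 0
        return self.pos

    def step(self, action):
        dx, dy = self.MOVES[action]
        nxt = (self.pos[0] + dx, self.pos[1] + dy)
        if nxt in self.obstacles or not all(0 <= c < self.size for c in nxt):
            self.collisions += 1          # collision event: penalize, stay in place
            return self.pos, -1.0, False
        self.pos = nxt
        if self.pos == self.goals[self.goal_idx]:
            self.goal_idx += 1            # advance to the next sequential goal
            done = self.goal_idx == len(self.goals)
            return self.pos, 1.0, done
        return self.pos, -0.01, False     # small step cost

def rollout(env, policy, max_steps=200):
    """Run one episode; returns (success, collision count) -- the two
    quantities a collision-aware evaluation metric would combine."""
    obs, done, steps = env.reset(), False, 0
    while not done and steps < max_steps:
        obs, reward, done = env.step(policy(obs))
        steps += 1
    return done, env.collisions
```

For example, `rollout(GridNavEnv(), lambda obs: random.choice(range(4)))` runs one episode under a random policy and reports whether all goals were reached and how many collisions occurred, mirroring the success-plus-safety bookkeeping the benchmark's metrics are built around.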
Problem

Research questions and friction points this paper is trying to address.

visual navigation
collision avoidance
indoor environments
mobile robots
benchmark
Innovation

Methods, ideas, or system contributions that make the work stand out.

reactive visual navigation
collision-aware benchmark
indoor mobile robots
Habitat 2.0
offline trajectory dataset
Jaewon Lee
Korea University
3D Vision
Jaeseok Heo
Department of Electrical and Computer Engineering, Seoul National University (SNU); Automation and Systems Research Institute (ASRI); Sequor Robotics Inc., Seoul, Republic of Korea
Gunmin Lee
Seoul National University
Machine Learning · Robotics · Autonomous Driving
Howoong Jun
Interdisciplinary Program in Artificial Intelligence, Seoul National University (SNU); Automation and Systems Research Institute (ASRI); Sequor Robotics Inc., Seoul, Republic of Korea
Jeongwoo Oh
Sequor Robotics Inc., Seoul, Republic of Korea
Songhwai Oh
Department of Electrical and Computer Engineering, Seoul National University, Korea
Robotics · Computer Vision · Cyber-Physical Systems