120 Minutes and a Laptop: Minimalist Image-goal Navigation via Unsupervised Exploration and Offline RL

📅 2026-03-27
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the high deployment cost of image-goal navigation, which typically relies on large-scale datasets, pretrained models, and substantial computational resources. The authors formulate the task as an offline goal-conditioned reinforcement learning problem, integrating unsupervised exploration, hindsight goal relabeling, and offline policy learning to develop and deploy real-world navigation policies end-to-end on consumer-grade hardware within 120 minutes, without any human intervention. The approach improves exploration efficiency over zero-shot baselines in both simulation and real-world environments and scales favorably with data volume, substantially lowering the barrier to practical deployment.
๐Ÿ“ Abstract
The prevailing paradigm for image-goal visual navigation often assumes access to large-scale datasets, substantial pretraining, and significant computational resources. In this work, we challenge this assumption. We show that we can collect a dataset, train an in-domain policy, and deploy it to the real world (1) in less than 120 minutes, (2) on a consumer laptop, (3) without any human intervention. Our method, MINav, formulates image-goal navigation as an offline goal-conditioned reinforcement learning problem, combining unsupervised data collection with hindsight goal relabeling and offline policy learning. Experiments in simulation and the real world show that MINav improves exploration efficiency, outperforms zero-shot navigation baselines in target environments, and scales favorably with dataset size. These results suggest that effective real-world robotic learning can be achieved with high computational efficiency, lowering the barrier to rapid policy prototyping and deployment.
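The abstract credits hindsight goal relabeling with turning unsupervised exploration data into goal-conditioned training data for offline RL. A minimal sketch of that idea is below; the function and variable names are illustrative, not the paper's code, and real observations would be images matched by visual similarity rather than exact equality:

```python
import random

def hindsight_relabel(trajectory, num_relabels=4):
    """Relabel each transition with goals drawn from its own future.

    trajectory: list of (obs, action, next_obs) tuples; obs stands in for
    an observation (an image in the image-goal navigation setting).
    Returns goal-conditioned transitions
    (obs, action, next_obs, goal, reward, done).
    """
    relabeled = []
    T = len(trajectory)
    for t, (obs, action, next_obs) in enumerate(trajectory):
        # Sample future steps of the same trajectory as achieved goals,
        # so every exploration rollout yields "successful" goal reaches.
        picks = random.sample(range(t, T), k=min(num_relabels, T - t))
        for k in picks:
            goal = trajectory[k][2]  # observation achieved at a future step
            # Sparse reward: 1 when the relabeled goal is reached, else 0.
            reached = next_obs == goal
            relabeled.append((obs, action, next_obs, goal,
                              float(reached), reached))
    return relabeled
```

The relabeled transitions can then be fed to any offline goal-conditioned policy learner; the key point is that no human-specified goals or rewards are needed, since every goal is an observation the robot actually visited.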
Problem

Research questions and friction points this paper is trying to address.

image-goal navigation
computational efficiency
real-world deployment
data efficiency
robotic learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

image-goal navigation
offline reinforcement learning
unsupervised exploration
hindsight relabeling
minimalist robotics
🔎 Similar Papers
No similar papers found.