History-Conditioned Spatio-Temporal Visual Token Pruning for Efficient Vision-Language Navigation

📅 2026-03-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Vision-Language Navigation (VLN) faces practical deployment challenges due to high computational overhead, which hinders real-time performance. This work proposes a training-free spatio-temporal visual token pruning framework that enables efficient long-horizon inference without modifying the pretrained model. By integrating spatial token selection on the current view with spatio-temporal compression of historical memory, the method introduces, for the first time in VLN systems, history-conditioned spatio-temporal token pruning. It leverages attention-driven token importance estimation and query-guided spatio-temporal filtering, offering a plug-and-play solution that requires no retraining. Evaluated on standard VLN benchmarks, the approach significantly outperforms existing pruning methods, maintaining high navigation accuracy even under extreme pruning ratios. Its low latency and reliable instruction following are further validated on a Unitree Go2 quadruped robot.

📝 Abstract
Vision-Language Navigation (VLN) enables robots to follow natural-language instructions in visually grounded environments, serving as a key capability for embodied robotic systems. Recent Vision-Language-Action (VLA) models have demonstrated strong navigation performance, but their high computational cost introduces latency that limits real-time deployment. We propose a training-free spatio-temporal vision token pruning framework tailored to VLA-based VLN. We apply spatial token selection to the current view, alongside spatio-temporal compression of historical memories, enabling efficient long-horizon inference while reducing redundant computation. Leveraging attention-based token importance and query-guided spatio-temporal filtering, the proposed approach preserves navigation-relevant information without retraining or modifying pretrained models, allowing plug-and-play integration into existing VLA systems. Through experiments on standard VLN benchmarks, we confirm that our method significantly outperforms existing pruning strategies, preserving superior navigation accuracy even under extreme pruning ratios while maintaining highly competitive inference efficiency. Real-world deployment on a Unitree Go2 quadruped robot further validates reliable, low-latency instruction-following navigation under practical robotic constraints. We hope this work helps bridge the gap between large-scale multimodal modeling and efficient, real-time embodied deployment in robotic navigation systems.
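The abstract describes query-guided, attention-based token pruning: visual tokens are ranked by their relevance to the instruction, with the current view pruned mildly and historical frames compressed more aggressively. The paper does not publish its exact scoring rule, so the sketch below is only a minimal illustration of the general idea, assuming pooled token and query embeddings and scaled dot-product scores; the `prune_tokens` helper, keep ratios, and shapes are all illustrative, not the authors' implementation.

```python
import numpy as np

def prune_tokens(tokens, query, keep_ratio):
    """Keep the top-k visual tokens ranked by attention to the query.

    tokens: (N, D) array of visual token embeddings
    query:  (D,) pooled instruction/query embedding
    keep_ratio: fraction of tokens to keep (0 < keep_ratio <= 1)
    """
    # Scaled dot-product score of each token against the query
    # (a stand-in for attention-driven importance estimation).
    scores = tokens @ query / np.sqrt(tokens.shape[1])
    k = max(1, int(round(keep_ratio * tokens.shape[0])))
    # Indices of the k highest-scoring tokens, restored to original order
    # so spatial layout is preserved among the survivors.
    keep = np.sort(np.argsort(scores)[-k:])
    return tokens[keep], keep

rng = np.random.default_rng(0)
current_view = rng.standard_normal((196, 64))            # e.g. 14x14 patch tokens
history = [rng.standard_normal((196, 64)) for _ in range(8)]
query = rng.standard_normal(64)

# Illustrative ratios: milder pruning for the current view,
# aggressive compression for historical memory frames.
cur_kept, _ = prune_tokens(current_view, query, keep_ratio=0.5)
hist_kept = [prune_tokens(f, query, keep_ratio=0.1)[0] for f in history]
```

Because ranking and slicing require no gradients or weight updates, a scheme of this shape can sit in front of a frozen VLA model, which is what makes the training-free, plug-and-play claim plausible.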
Problem

Research questions and friction points this paper is trying to address.

Vision-Language Navigation
computational efficiency
real-time deployment
token pruning
embodied robotics
Innovation

Methods, ideas, or system contributions that make the work stand out.

token pruning
vision-language navigation
spatio-temporal compression
training-free
embodied AI