🤖 AI Summary
Existing visual navigation methods often rely on simplistic feature encoders and temporal pooling, failing to preserve the fine-grained spatiotemporal structure essential for accurate action prediction and progress estimation. To address this limitation, this work proposes a unified spatiotemporal representation framework that leverages goal-conditioned visual encoding to extract features from image sequences and target observations. It introduces a dynamic graph aggregation mechanism that integrates spatial graph reasoning, hybrid temporal shifting, and multi-resolution difference-aware convolutions to model both spatial relationships and temporal dynamics. The proposed approach consistently improves navigation performance across multiple benchmarks while offering a generalizable visual backbone architecture.
📝 Abstract
Visual navigation requires a robot to reach a specified goal, such as one given by a goal image, based on a sequence of first-person visual observations. While recent learning-based approaches have made significant progress, they often focus on improving policy heads or decision strategies while relying on simplistic feature encoders and temporal pooling to represent visual input. This leads to the loss of fine-grained spatial and temporal structure, ultimately limiting accurate action prediction and progress estimation. In this paper, we propose a unified spatio-temporal representation framework that enhances visual encoding for robotic navigation. Our approach extracts features from both image sequences and goal observations, and fuses them with a dedicated spatio-temporal fusion module. This module performs spatial graph reasoning within each frame and models temporal dynamics using a hybrid temporal shift module combined with multi-resolution difference-aware convolution. Experimental results demonstrate that our approach consistently improves navigation performance and offers a generalizable visual backbone for goal-conditioned control. Code is available at https://github.com/hren20/STRNet.
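To make the temporal side of the fusion module concrete, the sketch below illustrates the two generic operations the abstract names: a TSM-style temporal shift (a fraction of channels exchanged with neighboring frames) and multi-resolution temporal differencing. This is a minimal NumPy sketch under assumed conventions, not the STRNet implementation; the function names, the `shift_frac` ratio, and the stride set are hypothetical choices for illustration.

```python
import numpy as np

def temporal_shift(feats, shift_frac=0.125):
    """TSM-style temporal shift over features of shape (T, C, H, W).

    A fraction of channels is shifted one step forward in time, an
    equal fraction one step backward; the rest stay in place. The
    vacated frames are zero-padded. (shift_frac=1/8 is a hypothetical
    choice, not taken from the paper.)
    """
    T, C, H, W = feats.shape
    n = max(1, int(C * shift_frac))
    out = feats.copy()
    # First n channels: frame t receives frame t-1 (shift forward in time).
    out[1:, :n] = feats[:-1, :n]
    out[0, :n] = 0
    # Next n channels: frame t receives frame t+1 (shift backward in time).
    out[:-1, n:2 * n] = feats[1:, n:2 * n]
    out[-1, n:2 * n] = 0
    return out

def multi_res_temporal_diff(feats, strides=(1, 2)):
    """Difference maps at several temporal resolutions.

    For each stride s, returns a (T, C, H, W) array with
    diff[t] = feats[t] - feats[t - s], zero where t < s. In a real
    model these maps would feed difference-aware convolutions.
    """
    diffs = []
    for s in strides:
        d = np.zeros_like(feats)
        d[s:] = feats[s:] - feats[:-s]
        diffs.append(d)
    return diffs
```

In a full network these operations would act on per-frame feature maps before spatial graph reasoning; here they only show how shifted channels and stride-wise differences expose short- and longer-range temporal dynamics without any learned parameters.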