Toward Ambulatory Vision: Learning Visually-Grounded Active View Selection

๐Ÿ“… 2025-12-15
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Current vision-language models (VLMs) process only static images and lack the dynamic viewpoint-selection capability essential for embodied intelligence. Method: We introduce the Visually Grounded Active View Selection (VG-AVS) task: selecting the most informative next viewpoint solely from a single input image, without scene memory or external knowledge. We establish the first purely vision-driven active view selection paradigm, construct the first synthetic paired view-query dataset, and propose a memory-free, end-to-end trainable VLM-based viewpoint-policy framework. Our approach fine-tunes pretrained VLMs with supervised fine-tuning followed by reinforcement learning, leveraging synthetic data generation and evaluating transfer to real-world domains. Contribution/Results: Experiments demonstrate strong generalization in both synthetic and real environments, and integrating our model into an Embodied Question Answering (EQA) system significantly improves downstream question-answering accuracy.

๐Ÿ“ Abstract
Vision Language Models (VLMs) excel at visual question answering (VQA) but remain limited to snapshot vision, reasoning from static images. In contrast, embodied agents require ambulatory vision, actively moving to obtain more informative views. We introduce Visually Grounded Active View Selection (VG-AVS), a task that selects the most informative next viewpoint using only the visual information in the current image, without relying on scene memory or external knowledge. To support this task, we construct a synthetic dataset with automatically generated paired query-target views and question-answer prompts. We also propose a framework that fine-tunes pretrained VLMs through supervised fine-tuning (SFT) followed by RL-based policy optimization. Our approach achieves strong question answering performance based on viewpoint selection and generalizes robustly to unseen synthetic and real scenes. Furthermore, incorporating our learned VG-AVS framework into existing scene-exploration-based EQA systems improves downstream question-answering accuracy.
Problem

Research questions and friction points this paper is trying to address.

Enabling embodied agents to actively select informative viewpoints
Developing a visually grounded active view selection task that needs no scene memory or external knowledge
Improving vision language models for dynamic scene exploration and question answering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tunes pretrained VLMs with SFT followed by RL-based policy optimization
Selects the next viewpoint using only the visual information in the current image
Trains on an automatically generated synthetic dataset and generalizes to unseen synthetic and real scenes
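The two-stage training recipe above (supervised fine-tuning toward labeled best views, then RL-based policy refinement with a question-answering reward) can be illustrated with a minimal toy sketch. This is not the paper's implementation: the discrete action set, the `sft_step`/`rl_step` helpers, and the reward function are all illustrative assumptions, standing in for a full VLM policy over camera motions.

```python
import math
import random

# Toy sketch of the SFT-then-RL recipe described above (assumed names,
# not the paper's code): a viewpoint policy over a small discrete action
# set, first pushed toward labeled "best next view" targets with
# cross-entropy (SFT), then refined with a REINFORCE-style update whose
# reward stands in for downstream QA success.

ACTIONS = ["turn_left", "turn_right", "move_forward", "look_up"]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sft_step(logits, target_idx, lr=0.5):
    """One cross-entropy gradient step toward the labeled best viewpoint."""
    probs = softmax(logits)
    return [w - lr * (p - (1.0 if i == target_idx else 0.0))
            for i, (w, p) in enumerate(zip(logits, probs))]

def rl_step(logits, reward_fn, lr=0.5, rng=random):
    """One REINFORCE step: sample a view, score it with a QA-style reward."""
    probs = softmax(logits)
    a = rng.choices(range(len(logits)), weights=probs)[0]
    r = reward_fn(a)  # e.g. 1.0 if the chosen view lets the model answer
    return [w + lr * r * ((1.0 if i == a else 0.0) - p)
            for i, (w, p) in enumerate(zip(logits, probs))]

if __name__ == "__main__":
    rng = random.Random(0)
    logits = [0.0] * len(ACTIONS)
    target = 2  # pretend "move_forward" reveals the queried object

    for _ in range(20):                       # stage 1: SFT
        logits = sft_step(logits, target)
    for _ in range(50):                       # stage 2: RL refinement
        logits = rl_step(logits,
                         lambda a: 1.0 if a == target else 0.0,
                         rng=rng)

    probs = softmax(logits)
    best = max(range(len(ACTIONS)), key=probs.__getitem__)
    print("chosen view:", ACTIONS[best])
```

In the actual framework the policy is a pretrained VLM conditioned on the current image and question, and the RL reward comes from whether the selected view improves question answering; the sketch only mirrors the shape of the two training stages.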