RLS3: RL-Based Synthetic Sample Selection to Enhance Spatial Reasoning in Vision-Language Models for Indoor Autonomous Perception

📅 2025-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-language models (VLMs) exhibit weak spatial reasoning in indoor environments due to scarce and imbalanced fine-tuning data. Method: We propose a reinforcement learning (RL)-driven synthetic data generation framework in which an RL agent, trained via PPO, acts as a learnable data sampler: guided by the VLM's task vulnerabilities, it dynamically manipulates the AI2-THOR simulator to generate high-information, finely controllable indoor scenes with precise ground-truth annotations. The method integrates CLIP/ViLT multimodal representations and differentiable rendering for end-to-end joint optimization. Contribution/Results: Experiments demonstrate consistent improvements across multiple spatial reasoning benchmarks: +12.7–19.3% absolute accuracy gain for VLMs, 3.2× higher data efficiency, and significantly enhanced contextual awareness and cross-scene generalization.

📝 Abstract
Vision-language model (VLM) fine-tuning for application-specific visual grounding based on natural language instructions has become one of the most popular approaches for learning-enabled autonomous systems. However, such fine-tuning relies heavily on high-quality datasets to achieve strong performance on downstream tasks, and VLMs often suffer from insufficient and imbalanced fine-tuning data. To address these issues, we propose a new generalizable framework that improves VLM fine-tuning by integrating it with a reinforcement learning (RL) agent. Our method uses the RL agent to manipulate objects within an indoor setting, creating synthetic fine-tuning data that targets specific vulnerabilities of the VLM. Specifically, the VLM's performance serves as feedback to the RL agent, which then generates informative data that efficiently fine-tunes the VLM on the targeted task (e.g., spatial reasoning). The key contribution of this work is a framework in which the RL agent serves as an informative data sampling tool that helps the VLM enhance performance and address task-specific vulnerabilities. By steering the data sampling process toward the weaknesses of the VLM, we can effectively train a more context-aware model. In addition, generating synthetic data gives us precise control over each scene and lets us generate granular ground-truth captions. Our results show that the proposed data generation approach improves the spatial reasoning performance of VLMs, demonstrating the benefits of RL-guided data generation in vision-language tasks.
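The feedback loop the abstract describes — sample a scene configuration, score the VLM on it, and reward the sampler for configurations the VLM gets wrong — can be sketched in miniature. This is a simplified illustration, not the paper's method: the paper trains a PPO agent that manipulates AI2-THOR scenes, whereas the sketch below uses a bandit-style weighted sampler over spatial relations, and `stub_vlm_accuracy`, `RelationSampler`, and `generate_synthetic_batch` are all hypothetical stand-ins.

```python
import random

# Hypothetical stand-ins: a real system would render AI2-THOR scenes and
# query an actual VLM; here the "scene" is just a spatial relation and the
# VLM is a stub with a known weakness.
SPATIAL_RELATIONS = ["left of", "right of", "behind", "in front of"]

def stub_vlm_accuracy(relation):
    # Pretend the VLM is weak on "behind" (a task-specific vulnerability).
    return 0.4 if relation == "behind" else 0.9

class RelationSampler:
    """Bandit-style data sampler: shifts probability mass toward the
    relations the VLM currently gets wrong (high error = high reward)."""
    def __init__(self, relations, lr=0.5):
        self.scores = {r: 1.0 for r in relations}
        self.lr = lr

    def sample(self, rng):
        weights = [self.scores[r] for r in self.scores]
        return rng.choices(list(self.scores), weights=weights, k=1)[0]

    def update(self, relation, vlm_accuracy):
        # Reward = VLM error rate on this relation.
        self.scores[relation] += self.lr * (1.0 - vlm_accuracy)

def generate_synthetic_batch(sampler, rng, n=100):
    batch = []
    for _ in range(n):
        rel = sampler.sample(rng)
        sampler.update(rel, stub_vlm_accuracy(rel))
        # A real pipeline would emit a rendered image plus a ground-truth
        # caption; here the caption alone stands in for the training pair.
        batch.append((rel, f"the mug is {rel} the laptop"))
    return batch

rng = random.Random(0)
sampler = RelationSampler(SPATIAL_RELATIONS)
batch = generate_synthetic_batch(sampler, rng, n=200)
counts = {r: sum(1 for rel, _ in batch if rel == r) for r in SPATIAL_RELATIONS}
```

After the loop runs, the sampler's score for the vulnerable relation grows roughly six times faster per draw than for the others, so the synthetic batch skews toward the examples the VLM most needs; the paper realizes the same pressure through a PPO reward rather than this bandit update.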
Problem

Research questions and friction points this paper is trying to address.

Visual Language Models
Spatial Relationship Understanding
Data Insufficiency and Imbalance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning
Visual Language Models
Spatial Understanding