Spatial-SSRL: Enhancing Spatial Understanding via Self-Supervised Reinforcement Learning

📅 2025-10-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large Vision-Language Models (LVLMs) exhibit significant deficiencies in spatial understanding, and existing approaches rely on costly manual annotations, specialized sensors, or constrained environments, which limits scalability. To address this, we propose Spatial-SSRL, a self-supervised reinforcement learning paradigm that requires no human or LVLM annotation. Spatial-SSRL formulates five verifiable pretext tasks (shuffled patch reordering, flipped patch recognition, cropped patch inpainting, regional depth ordering, and relative 3D position prediction) to jointly capture 2D and 3D spatial structure in a unified framework. Using only ordinary RGB or RGB-D images, it generates reliable supervisory signals without external supervision. Evaluated on seven spatial understanding benchmarks in both image and video settings, Spatial-SSRL improves over the Qwen2.5-VL baselines by an average of 4.63% (3B) and 3.89% (7B), substantially enhancing spatial reasoning while preserving general vision-language capabilities.

📝 Abstract
Spatial understanding remains a weakness of Large Vision-Language Models (LVLMs). Existing supervised fine-tuning (SFT) and recent reinforcement learning with verifiable rewards (RLVR) pipelines depend on costly supervision, specialized tools, or constrained environments that limit scale. We introduce Spatial-SSRL, a self-supervised RL paradigm that derives verifiable signals directly from ordinary RGB or RGB-D images. Spatial-SSRL automatically formulates five pretext tasks that capture 2D and 3D spatial structure: shuffled patch reordering, flipped patch recognition, cropped patch inpainting, regional depth ordering, and relative 3D position prediction. These tasks provide ground-truth answers that are easy to verify and require no human or LVLM annotation. Training on our tasks substantially improves spatial reasoning while preserving general visual capabilities. On seven spatial understanding benchmarks in both image and video settings, Spatial-SSRL delivers average accuracy gains of 4.63% (3B) and 3.89% (7B) over the Qwen2.5-VL baselines. Our results show that simple, intrinsic supervision enables RLVR at scale and provides a practical route to stronger spatial intelligence in LVLMs.
Problem

Research questions and friction points this paper is trying to address.

Improving spatial understanding in large vision-language models
Reducing dependency on costly supervised training methods
Developing self-supervised reinforcement learning for spatial reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised reinforcement learning from ordinary RGB and RGB-D images
Automated pretext tasks for spatial structure learning
Verifiable signals without human annotation requirements
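The core idea behind these contributions is that a pretext task constructed from the image itself carries its own ground-truth answer, so a binary reward can be verified exactly without any annotator. The sketch below illustrates this for the shuffled-patch-reordering task in plain Python; the function and variable names are hypothetical and the paper's actual task construction, patch sizes, and reward shaping are not specified here.

```python
import random

def make_patch_reordering_task(image, grid=2, seed=None):
    """Cut an image into grid x grid patches, shuffle them, and return
    (shuffled_patches, answer), where answer[j] is the original position
    of the patch now at shuffled position j. The answer is the verifiable
    ground truth; no human labeling is involved.

    `image` is a 2D list (rows of pixel values) whose height and width
    are assumed divisible by `grid`.
    """
    h, w = len(image), len(image[0])
    ph, pw = h // grid, w // grid

    # Extract patches in row-major order.
    patches = []
    for r in range(grid):
        for c in range(grid):
            patch = [row[c * pw:(c + 1) * pw]
                     for row in image[r * ph:(r + 1) * ph]]
            patches.append(patch)

    # Shuffle and record which original patch landed at each position.
    order = list(range(len(patches)))
    random.Random(seed).shuffle(order)
    shuffled = [patches[i] for i in order]
    return shuffled, order

def verify(prediction, answer):
    """Verifiable reward: 1.0 only for an exact permutation match."""
    return 1.0 if prediction == answer else 0.0
```

A model shown the shuffled patches predicts a permutation; `verify` checks it against the recorded `order`, giving the kind of cheap, exact supervisory signal the paper derives from ordinary images.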