🤖 AI Summary
Large Vision-Language Models (LVLMs) exhibit significant deficiencies in spatial understanding; existing approaches rely on costly manual annotations, specialized sensors, or constrained environments—limiting scalability. To address this, we propose Spatial-SSRL, a self-supervised reinforcement learning paradigm that requires no human annotation. Spatial-SSRL introduces five verifiable pretext tasks—shuffled patch reordering, flipped patch recognition, cropped patch inpainting, regional depth ordering, and relative 3D position prediction—to jointly model 2D and 3D spatial structure in a unified framework. Leveraging only standard RGB or RGB-D images, it generates reliable supervisory signals without external supervision. Evaluated on seven spatial understanding benchmarks, Spatial-SSRL improves over the Qwen2.5-VL baseline by +4.63% (3B) and +3.89% (7B), substantially enhancing spatial reasoning while preserving general vision-language comprehension capabilities.
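To make the "verifiable pretext task" idea concrete, here is a minimal sketch (not the paper's code; function names, grid size, and the answer format are my assumptions) of how a shuffled patch reordering task can be generated from any RGB image, with the ground-truth permutation produced for free:

```python
import numpy as np

def make_patch_reordering_task(image, grid=2, seed=0):
    """Illustrative sketch: split an H x W x C image into a grid x grid
    mosaic, shuffle the patches, and return the shuffled patches together
    with the ground-truth original index of each shuffled patch. The
    answer is verifiable by construction; no human annotation is needed."""
    h, w = image.shape[0] // grid, image.shape[1] // grid
    # Patches in row-major order: patch k covers row k // grid, column k % grid.
    patches = [image[i*h:(i+1)*h, j*w:(j+1)*w]
               for i in range(grid) for j in range(grid)]
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(patches))
    shuffled = [patches[k] for k in perm]
    # answer[s] = original index of the patch now shown in slot s.
    answer = [int(k) for k in perm]
    return shuffled, answer
```

In an RLVR setup, the model would see the shuffled mosaic, predict the original indices, and receive a reward from a trivial comparison against `answer`.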
📝 Abstract
Spatial understanding remains a weakness of Large Vision-Language Models (LVLMs). Existing supervised fine-tuning (SFT) and recent reinforcement learning with verifiable rewards (RLVR) pipelines depend on costly supervision, specialized tools, or constrained environments that limit scale. We introduce Spatial-SSRL, a self-supervised RL paradigm that derives verifiable signals directly from ordinary RGB or RGB-D images. Spatial-SSRL automatically formulates five pretext tasks that capture 2D and 3D spatial structure: shuffled patch reordering, flipped patch recognition, cropped patch inpainting, regional depth ordering, and relative 3D position prediction. These tasks provide ground-truth answers that are easy to verify and require no human or LVLM annotation. Training on our tasks substantially improves spatial reasoning while preserving general visual capabilities. On seven spatial understanding benchmarks in both image and video settings, Spatial-SSRL delivers average accuracy gains of 4.63% (3B) and 3.89% (7B) over the Qwen2.5-VL baselines. Our results show that simple, intrinsic supervision enables RLVR at scale and provides a practical route to stronger spatial intelligence in LVLMs.
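The 3D-side tasks follow the same pattern: the depth channel of an ordinary RGB-D image already contains the answer. Below is a hedged sketch (region sampling scheme, prompt wording, and reward shape are my assumptions, not the paper's) of a regional depth ordering task plus the exact-match reward that makes it verifiable without a human or LVLM judge:

```python
import numpy as np

def make_depth_ordering_task(depth, seed=0):
    """Illustrative sketch: sample two square regions from a depth map and
    derive a verifiable answer about which region is closer to the camera."""
    rng = np.random.default_rng(seed)
    h, w = depth.shape
    s = min(h, w) // 4  # region side length (arbitrary choice for the sketch)
    (y1, x1), (y2, x2) = [(rng.integers(0, h - s), rng.integers(0, w - s))
                          for _ in range(2)]
    d1 = depth[y1:y1+s, x1:x1+s].mean()
    d2 = depth[y2:y2+s, x2:x2+s].mean()
    answer = "A" if d1 < d2 else "B"  # smaller mean depth = closer
    question = (f"Which region is closer to the camera: "
                f"A at ({y1},{x1}) or B at ({y2},{x2})?")
    return question, answer

def exact_match_reward(prediction: str, answer: str) -> float:
    # RLVR-style verifiable reward: binary exact match, no learned judge.
    return 1.0 if prediction.strip().upper() == answer.strip().upper() else 0.0
```

Because both the task and its answer are generated programmatically, the supervision scales with the amount of raw RGB-D data available, which is the core claim of the paradigm.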