🤖 AI Summary
Vision-language models (VLMs) lack metric-scale spatial reasoning capabilities for embodied tasks and struggle to autonomously orchestrate multiple visual tools. This work proposes DIRL (Double Interactive Reinforcement Learning), a two-phase framework: a teaching phase bootstraps the policy from demonstrations by a single-tool specialist trained via interactive RL, combined with all-tool traces from a frontier model; an exploration phase then refines adaptive, coordinated scheduling of heterogeneous tools—including depth estimation, segmentation, and pose estimation—through continued RL with interactive feedback. DIRL eliminates hand-crafted prompts and fixed tool chains, enabling the VLM to learn tool-composition policies end-to-end. The resulting model, SpaceTools, achieves state-of-the-art performance on the RoboSpatial-Home, BLINK, and BOP-ASK benchmarks, improving on RoboSpatial over vanilla SFT (+12%) and RL (+16%) baselines, and is successfully deployed on a real-world 7-DOF robotic manipulator.
📝 Abstract
Vision Language Models (VLMs) demonstrate strong qualitative visual understanding, but struggle with the metrically precise spatial reasoning required for embodied applications. The agentic paradigm promises that VLMs can use a wide variety of tools to augment these capabilities, such as depth estimators, segmentation models, and pose estimators. Yet it remains an open challenge how to realize this vision without relying solely on handcrafted prompting strategies or enforcing fixed, predefined tool pipelines that limit VLMs' ability to discover optimal tool-use patterns. Reinforcement Learning could close this gap, but has so far been limited to reasoning with a single visual tool due to the large search space in multi-tool reasoning. We introduce Double Interactive Reinforcement Learning (DIRL), a two-phase training framework in which VLMs learn to coordinate multiple tools through interactive exploration and feedback. In the teaching phase, we combine demonstrations from a single-tool specialist trained via interactive RL with traces from a frontier model using all tools. In the exploration phase, the model further refines multi-tool coordination through continued RL. Our model, SpaceTools, with tool-augmented spatial reasoning ability, achieves state-of-the-art performance on spatial understanding benchmarks (RoboSpatial-Home, BLINK, BOP-ASK) and demonstrates reliable real-world manipulation using a 7-DOF robot as a tool. DIRL provides substantial improvements over the vanilla SFT (+12% on RoboSpatial) and RL (+16% on RoboSpatial) baselines. Project page: https://spacetools.github.io/.