🤖 AI Summary
Vision-language models (VLMs) excel at qualitative spatial reasoning but lack the centimeter-level metric precision required for robotic manipulation—primarily because they ignore geometric cues from depth sensors and camera calibration, reducing geometric reasoning to pattern recognition. To address this, we propose a tool-augmented geometric reasoning framework that transforms VLMs into engines capable of invoking external geometric computation tools, dynamically generating and executing Python code for high-precision spatial inference. Our method employs a two-stage training strategy—supervised fine-tuning followed by reinforcement fine-tuning—with a hierarchical reward design and leverages the newly introduced TIGeR-300K dataset to support robust tool-calling learning. Evaluated on geometric reasoning benchmarks, our approach achieves state-of-the-art performance; on real-world robotic tasks, it enables centimeter-accurate pose estimation, trajectory generation, and spatial verification.
📝 Abstract
Vision-Language Models (VLMs) have shown remarkable capabilities in spatial reasoning, yet they remain fundamentally limited to qualitative estimates and lack the computational precision required for real-world robotics. Current approaches fail to leverage metric cues from depth sensors and camera calibration, instead reducing geometric problems to pattern recognition tasks that cannot deliver the centimeter-level accuracy essential for robotic manipulation. We present TIGeR (Tool-Integrated Geometric Reasoning), a novel framework that transforms VLMs from perceptual estimators into geometric computers by enabling them to generate and execute precise geometric computations through external tools. Rather than attempting to internalize complex geometric operations within neural networks, TIGeR empowers models to recognize geometric reasoning requirements, synthesize appropriate computational code, and invoke specialized libraries for exact calculations. To support this paradigm, we introduce TIGeR-300K, a comprehensive tool-invocation-oriented dataset covering point transformations, pose estimation, trajectory generation, and spatial compatibility verification, complete with tool invocation sequences and intermediate computations. Through a two-stage training pipeline combining supervised fine-tuning (SFT) and reinforcement fine-tuning (RFT) with our proposed hierarchical reward design, TIGeR achieves state-of-the-art performance on geometric reasoning benchmarks while demonstrating centimeter-level precision in real-world robotic manipulation tasks.
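To make the paradigm concrete, here is a minimal sketch of the kind of exact geometric computation a TIGeR-style tool call could delegate to external code rather than estimate perceptually: back-projecting a pixel with a metric depth reading into a 3D point in the camera frame using the standard pinhole model. The intrinsics and pixel values below are illustrative assumptions, not from the paper, and the actual tool library TIGeR invokes is not specified here.

```python
import numpy as np

def backproject(u, v, depth, K):
    """Lift pixel (u, v) with metric depth (meters) to a 3D point
    in the camera frame, using pinhole intrinsics K (3x3)."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = depth * (u - cx) / fx
    y = depth * (v - cy) / fy
    return np.array([x, y, depth])

# Hypothetical calibration and depth-sensor reading for illustration.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
point = backproject(u=500, v=300, depth=0.8, K=K)
print(point)  # exact metric coordinates: [0.24 0.08 0.8 ]
```

The point is that this arithmetic is trivial for code but out of reach for pattern-matching alone: the same pixel at a different depth or with different calibration maps to a different metric location, which is exactly the centimeter-level precision the abstract argues must come from computation rather than perception.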