OpenThinkIMG: Learning to Think with Images via Visual Tool Reinforcement Learning

📅 2025-05-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large Vision-Language Models (LVLMs) struggle to solve complex visual reasoning tasks through human-like interactive visual cognition, hindered by the lack of a unified visual tool infrastructure, difficulties in generating high-quality interactive data, and challenges in training robust interactive agents. Method: This paper introduces OpenThinkIMG, the first open-source, end-to-end framework for tool-augmented LVLMs, which defines standardized visual tool interfaces and an extensible mechanism for generating interactive trajectories. On top of it, the authors propose V-ToolRL, a reinforcement learning framework that enables LVLMs to autonomously discover optimal tool-invocation policies beyond static supervised fine-tuning. Contribution/Results: Built upon Qwen2-VL-2B, the resulting tool-augmented visual reasoning agent achieves a +28.83-point improvement over the SFT baseline on chart reasoning tasks, outperforming the supervised tool-learning baselines Taco and CogCom by an average of +12.7 points and GPT-4.1 by +8.68 accuracy points.

📝 Abstract
While humans can flexibly leverage interactive visual cognition for complex problem-solving, enabling Large Vision-Language Models (LVLMs) to learn similarly adaptive behaviors with visual tools remains challenging. A significant hurdle is the current lack of standardized infrastructure, which hinders integrating diverse tools, generating rich interaction data, and training robust agents effectively. To address these gaps, we introduce OpenThinkIMG, the first open-source, comprehensive end-to-end framework for tool-augmented LVLMs. It features standardized vision tool interfaces, scalable trajectory generation for policy initialization, and a flexible training environment. Furthermore, considering that supervised fine-tuning (SFT) on static demonstrations offers limited policy generalization for dynamic tool invocation, we propose a novel reinforcement learning (RL) framework, V-ToolRL, to train LVLMs to learn adaptive policies for invoking external vision tools. V-ToolRL enables LVLMs to autonomously discover optimal tool-usage strategies by directly optimizing for task success using feedback from tool interactions. We empirically validate V-ToolRL on challenging chart reasoning tasks. Our RL-trained agent, built upon Qwen2-VL-2B, significantly outperforms its SFT-initialized counterpart (+28.83 points) and surpasses established supervised tool-learning baselines like Taco and CogCom by an average of +12.7 points. Notably, it also surpasses prominent closed-source models like GPT-4.1 by +8.68 accuracy points. We hope OpenThinkIMG can serve as a foundational framework for advancing dynamic, tool-augmented visual reasoning, helping the community develop AI agents that can genuinely "think with images".
Problem

Research questions and friction points this paper is trying to address.

Enabling LVLMs to learn adaptive visual tool usage
Addressing lack of standardized infrastructure for vision tools
Improving policy generalization via reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Standardized vision tool interfaces for LVLMs
Reinforcement learning framework V-ToolRL for adaptive policies
Scalable trajectory generation for policy initialization
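The combination of a standardized tool interface and a task-success reward could be sketched roughly as below. This is a minimal illustrative sketch, not OpenThinkIMG's actual API: the names `ToolRegistry`, `ToolCall`, and `trajectory_reward`, and the per-call penalty, are all hypothetical, but they capture the idea of a uniform invocation surface plus a reward signal an RL method like V-ToolRL could optimize directly.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical sketch of a standardized vision-tool interface; names are
# illustrative and do not reflect OpenThinkIMG's real implementation.

@dataclass
class ToolCall:
    name: str
    args: dict

class ToolRegistry:
    """Uniform registry so an agent can invoke any vision tool by name."""
    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., str]] = {}

    def register(self, name: str, fn: Callable[..., str]) -> None:
        self._tools[name] = fn

    def invoke(self, call: ToolCall) -> str:
        if call.name not in self._tools:
            return f"error: unknown tool '{call.name}'"
        return self._tools[call.name](**call.args)

def trajectory_reward(final_answer: str, gold: str, n_calls: int,
                      call_penalty: float = 0.01) -> float:
    """Task-success reward minus a small cost per tool call -- the kind of
    end-of-trajectory signal an RL trainer could optimize."""
    return float(final_answer.strip() == gold.strip()) - call_penalty * n_calls

# Usage: register a dummy chart-reading tool and score a one-call trajectory.
registry = ToolRegistry()
registry.register("read_point", lambda x, y: f"value at ({x},{y}) is 42")

obs = registry.invoke(ToolCall("read_point", {"x": 3, "y": 5}))
reward = trajectory_reward("42", "42", n_calls=1)
```

Keeping every tool behind one `invoke` signature is what lets the same agent loop and trajectory generator work across an extensible tool set, which is the infrastructure gap the Problem section highlights.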