CodeV: Code with Images for Faithful Visual Reasoning via Tool-Aware Policy Optimization

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language agents exhibit reasoning unfaithfulness when invoking image manipulation tools—often triggering tools on irrelevant regions or ignoring tool outputs while still arriving at correct answers. Method: We propose the Tool-Aware Policy Optimization (TAPO) framework, which models visual tools as executable Python functions and introduces a faithfulness evaluation protocol that provides dense, process-level reinforcement learning rewards for tool input-output interactions. TAPO employs a two-stage training pipeline—supervised fine-tuning (SFT) followed by reinforcement learning (RL)—integrating the GRPO algorithm with a novel tool-aware reward mechanism. Contribution/Results: Experiments demonstrate that TAPO significantly improves tool usage faithfulness in visual search tasks while maintaining high answer accuracy. Moreover, it achieves state-of-the-art performance on multimodal understanding and mathematical reasoning benchmarks, validating its effectiveness in aligning agent behavior with tool execution semantics.
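The summary above describes modeling visual tools as executable Python functions whose outputs can be checked directly. The following is a minimal illustrative sketch, not the paper's actual API: a crop tool over a toy image encoded as a 2D list of pixel labels, so the tool's output is an inspectable artifact that a faithfulness check can score.

```python
# Hypothetical sketch (names and data format are assumptions, not CodeV's
# real interface): a visual tool as a plain Python function whose output
# can be verified against the question's queried evidence.
def crop(image, x0, y0, x1, y1):
    """Return the rectangular sub-image image[y0:y1], columns x0:x1."""
    return [row[x0:x1] for row in image[y0:y1]]

# A 4x4 toy image where "cat" pixels mark the queried evidence.
img = [
    ["bg", "bg", "bg", "bg"],
    ["bg", "cat", "cat", "bg"],
    ["bg", "cat", "cat", "bg"],
    ["bg", "bg", "bg", "bg"],
]

patch = crop(img, 1, 1, 3, 3)                 # region the agent selected
faithful = any("cat" in row for row in patch)  # crop contains the evidence?
```

Because the tool call is ordinary code, its input (the box) and output (the patch) are both concrete objects, which is what makes process-level supervision on tool behavior verifiable.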

📝 Abstract
Agentic vision-language models are increasingly trained to "think with images" by calling image operations. However, we show that high final-answer accuracy often hides unfaithful visual reasoning: models may invoke tools on irrelevant regions or ignore tool outputs entirely, yet still guess the correct answer. In this work, we first propose a faithfulness evaluation protocol that measures whether intermediate visual tool outputs (e.g., crops) actually contain the queried evidence. This reveals that recent visual agents achieve high final-answer accuracy but exhibit low rates of faithful tool-use on visual search benchmarks. We then introduce CodeV, a code-based visual agent trained with Tool-Aware Policy Optimization (TAPO). TAPO is a process-level RL framework that augments GRPO with dense rewards defined directly on visual tool inputs and outputs, rather than on chain-of-thought tokens, making supervision easier to verify and less susceptible to reward hacking. CodeV represents visual tools as executable Python code, and TAPO assigns step-wise rewards based solely on the question and tool output, encouraging both necessary and evidence-consistent tool use. In a two-stage SFT+RL pipeline, CodeV achieves competitive or superior accuracy while substantially increasing faithful tool-use rates on related visual search benchmarks. Beyond visual search, CodeV attains strong performance on a range of multimodal reasoning and math benchmarks, suggesting that explicitly supervising intermediate tool behavior is crucial for building trustworthy, agentic visual reasoning systems.
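The abstract describes TAPO as GRPO augmented with dense, step-wise rewards on tool inputs and outputs. Below is an illustrative sketch under stated assumptions, not the paper's exact formulation: the reward weighting, the binary per-step faithfulness flags, and the function names are all invented for clarity; only the overall shape (outcome reward plus per-tool-call process reward, with GRPO's group-normalized advantages) follows the abstract.

```python
# Illustrative sketch of a TAPO-style reward (weights and shapes are
# assumptions): combine an outcome reward on the final answer with a dense
# process reward averaged over per-tool-call faithfulness judgments.
def tapo_reward(answer_correct, tool_faithful_flags, w_tool=0.5):
    """Total reward = outcome reward + weighted mean step-wise tool reward."""
    outcome = 1.0 if answer_correct else 0.0
    if not tool_faithful_flags:            # no tool calls -> no process term
        return outcome
    process = sum(tool_faithful_flags) / len(tool_faithful_flags)
    return outcome + w_tool * process

def grpo_advantages(rewards):
    """GRPO-style advantage: normalize rewards within a group of rollouts."""
    mu = sum(rewards) / len(rewards)
    var = sum((r - mu) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5 or 1.0                # guard against zero variance
    return [(r - mu) / std for r in rewards]
```

The point of rewarding tool outputs rather than chain-of-thought tokens is that each term here is mechanically checkable, which the abstract argues makes the signal harder to reward-hack.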
Problem

Research questions and friction points this paper is trying to address.

Diagnose unfaithful visual reasoning in tool-using vision-language agents, where correct final answers can mask flawed tool use
Improve tool-use faithfulness in visual search through process-level policy optimization
Ensure visual agents invoke tools on relevant regions and ground their answers in the tool outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tool-Aware Policy Optimization for visual tool supervision
Code-based agent representing tools as executable Python
Process-level RL with rewards on tool inputs and outputs
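The last bullet says step rewards are judged only from the question and the tool output, never from chain-of-thought tokens. A minimal sketch of that interface is below; the keyword-matching judge is a deliberately simple stand-in for the paper's actual faithfulness evaluator, assumed here purely for illustration.

```python
# Hypothetical step-level judge (an assumption, not the paper's evaluator):
# it sees only the question and the tool output's content labels, so the
# reward cannot be gamed by rewording the reasoning trace.
def step_reward(question: str, tool_output_labels: set) -> float:
    """Reward 1.0 iff the tool output contains evidence named in the question."""
    query_terms = {w.strip("?.,").lower() for w in question.split()}
    return 1.0 if query_terms & tool_output_labels else 0.0
```

Restricting the judge's inputs this way is what makes the supervision "easier to verify" in the abstract's sense: the reward depends only on observable tool artifacts.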