Vero: An Open RL Recipe for General Visual Reasoning

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing vision-language models in general visual reasoning, which stem from their reliance on proprietary reinforcement learning (RL) data. To overcome this, the authors propose Vero, an open-source model built upon Qwen3-VL-8B-Instruct, trained on Vero-600K—the first publicly available RL dataset encompassing 600,000 samples across six diverse task categories. The study introduces a novel task-routing reward mechanism to unify heterogeneous output formats during training. Evaluated on VeroEval, a benchmark comprising 30 tasks, Vero achieves consistent improvements of 3.7–5.5 points on average and outperforms Qwen3-VL-8B-Thinking—a variant trained with private chain-of-thought data—on 23 tasks. These results underscore the critical role of broad task coverage in scaling reinforcement learning for vision-language models.
📝 Abstract
What does it take to build a visual reasoner that works across charts, science, spatial understanding, and open-ended tasks? The strongest vision-language models (VLMs) show such broad visual reasoning is within reach, but the recipe behind them remains unclear, locked behind proprietary reinforcement learning (RL) pipelines with non-public data. We introduce Vero, a family of fully open VLMs that matches or exceeds existing open-weight models across diverse visual reasoning tasks. We scale RL data and rewards across six broad task categories, constructing Vero-600K, a 600K-sample dataset from 59 datasets, and designing task-routed rewards that handle heterogeneous answer formats. Vero achieves state-of-the-art performance, improving over four base models by 3.7-5.5 points on average across VeroEval, our suite of 30 challenging benchmarks. Starting from Qwen3-VL-8B-Instruct, Vero outperforms Qwen3-VL-8B-Thinking on 23 of 30 benchmarks without additional proprietary thinking data. When trained from the same base model, Vero-600K exceeds existing RL datasets across task categories. Systematic ablations reveal that different task categories elicit qualitatively distinct reasoning patterns that transfer poorly in isolation, suggesting that broad data coverage is the primary driver of strong RL scaling. All data, code, and models are released.
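The abstract's task-routed rewards can be pictured as a dispatcher that sends each training sample to a verifier matching its task category, so multiple-choice, numeric, and open-ended answer formats can coexist in one RL run. The paper does not publish this interface here, so the sketch below is purely illustrative: the function names, the `category`/`answer` fields, and the two example verifiers are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a task-routed reward. Each sample carries a task
# category; the reward for a model prediction is computed by whichever
# verifier is registered for that category. All names are illustrative.

def exact_match_reward(pred: str, gold: str) -> float:
    # Binary reward for discrete answers (e.g. multiple choice).
    return 1.0 if pred.strip().lower() == gold.strip().lower() else 0.0

def numeric_reward(pred: str, gold: str, tol: float = 1e-3) -> float:
    # Tolerance-based reward for numeric answers (e.g. chart reading).
    try:
        return 1.0 if abs(float(pred) - float(gold)) <= tol else 0.0
    except ValueError:
        return 0.0

# One verifier per task category; real systems would cover all six.
REWARD_ROUTER = {
    "multiple_choice": exact_match_reward,
    "chart_numeric": numeric_reward,
}

def task_routed_reward(sample: dict, pred: str) -> float:
    # Dispatch on the sample's category; unknown categories score 0.
    verifier = REWARD_ROUTER.get(sample["category"])
    return verifier(pred, sample["answer"]) if verifier else 0.0
```

Under this design, adding a new answer format only requires registering another verifier, which is one plausible way a single reward pipeline could span the six heterogeneous task categories the paper describes.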
Problem

Research questions and friction points this paper is trying to address.

visual reasoning · vision-language models · reinforcement learning · generalization · open-weight models
Innovation

Methods, ideas, or system contributions that make the work stand out.

open visual reasoning · reinforcement learning · task-routed rewards · Vero-600K · vision-language models