🤖 AI Summary
This work addresses the lack of standardized reinforcement learning environments in visual agent research, which has hindered systematic analysis and fair comparison. The authors introduce a unified, extensible platform of 179 procedurally generated visual environments spanning ten domains, with controllable difficulty levels that enable reproducible experimentation. The platform makes possible, for the first time, a systematic evaluation of visual agents' learning mechanisms and generalization capabilities. The study reveals that observation guidance, such as captions or game rules, plays a more decisive role in training success than the choice of reinforcement learning algorithm, and that multi-turn interaction substantially enhances cross-domain transfer. Furthermore, training on diverse task categories promotes broad generalization, whereas narrow-domain training often leads to negative transfer. The platform is publicly released to advance standardization in visual agent research.
📝 Abstract
As agentic systems increasingly rely on reinforcement learning from verifiable rewards, standardized "gym" infrastructure has become essential for rapid iteration, reproducibility, and fair comparison. Vision agents lack such infrastructure, limiting systematic study of what drives their learning and where current models fall short. We introduce **Gym-V**, a unified platform of 179 procedurally generated visual environments across 10 domains with controllable difficulty, enabling controlled experiments that were previously infeasible across fragmented toolkits. Using it, we find that observation scaffolding is more decisive for training success than the choice of RL algorithm, with captions and game rules determining whether learning succeeds at all. Cross-domain transfer experiments further show that training on diverse task categories generalizes broadly while narrow training can cause negative transfer, with multi-turn interaction amplifying all of these effects. Gym-V is released as a convenient foundation for training environments and evaluation toolkits, aiming to accelerate future research on agentic VLMs.