🤖 AI Summary
This study addresses the limited cross-domain reasoning generalization of multimodal large language models (MLLMs). The authors propose Visual Game Learning (ViGaL), a novel post-training paradigm that employs synthetic, rule-based arcade games (e.g., Snake) as controllable pretext tasks. Through reinforcement learning, ViGaL enables MLLMs to autonomously acquire spatial, temporal, and causal reasoning patterns via pure visual–action interaction, without requiring mathematical solution annotations or explicit reasoning supervision. To the authors' knowledge, this is the first work to leverage structured game environments as scalable, general-purpose media for reasoning-oriented post-training. Applied to a 7B-parameter MLLM, ViGaL jointly strengthens multimodal perception, instruction following, and decision-making. Experiments demonstrate that ViGaL surpasses task-specific fine-tuned models on multimodal reasoning benchmarks, including MathVista and MMMU, while preserving the base model's performance on general visual understanding tasks.
📝 Abstract
Developing generalizable reasoning capabilities in multimodal large language models (MLLMs) remains challenging. Motivated by cognitive science literature suggesting that gameplay promotes transferable cognitive skills, we propose a novel post-training paradigm, Visual Game Learning (ViGaL), in which MLLMs develop out-of-domain generalization of multimodal reasoning by playing arcade-like games. Specifically, we show that post-training a 7B-parameter MLLM via reinforcement learning (RL) on simple arcade-like games, e.g., Snake, significantly enhances its downstream performance on multimodal math benchmarks such as MathVista and on multi-discipline questions such as those in MMMU, without the model seeing any worked solutions, equations, or diagrams during RL, suggesting that it captures transferable reasoning skills. Remarkably, our model outperforms specialist models tuned on multimodal reasoning data across multimodal reasoning benchmarks, while preserving the base model's performance on general visual benchmarks, a challenge where specialist models often fall short. Our findings suggest a new post-training paradigm: synthetic, rule-based games can serve as controllable and scalable pretext tasks that unlock generalizable multimodal reasoning abilities in MLLMs.