🤖 AI Summary
Multimodal large language models (MLLMs) exhibit poor generalization on geometric reasoning tasks, primarily due to the scarcity of high-quality geometry-oriented image-text pairs and the limited coverage of unseen geometric configurations by existing template-based synthetic data generation methods.
Method: We propose a Reinforcement Learning with Verifiable Rewards (RLVR) framework that starts from 50 fundamental geometric relations and leverages mathematically verifiable solution accuracy as a reward signal to dynamically optimize geometric image captioning—bypassing rigid template dependency.
Contribution/Results: The resulting high-fidelity geometry-aware dataset significantly improves cross-distribution generalization: +2.8–4.8% accuracy on statistics, arithmetic, algebraic, and numerical reasoning tasks with non-geometric input images (MathVista and MathVerse), and +2.4–3.9% accuracy on Art, Design, Tech, and Engineering tasks in MMMU. These gains indicate a substantive enhancement in general mathematical reasoning capability, not merely geometry-specific improvement.
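The core reward loop can be pictured as follows. This is a minimal illustrative sketch, not the paper's implementation: `solver` stands in for an MLLM that attempts the problem using only the generated caption, and all names (`verifiable_reward`, `mean_reward`) are hypothetical. The reward is verifiable because correctness is checked against a known ground-truth answer rather than judged by another model.

```python
# Hypothetical sketch of a verifiable reward for RLVR caption refinement.
# A caption earns reward 1.0 if a solver, given ONLY the caption (not the
# image), still answers the geometry question correctly -- i.e. the caption
# preserved the geometric facts needed to solve the problem.

def verifiable_reward(caption: str, question: str,
                      ground_truth: str, solver) -> float:
    """Binary reward: did the solver reach the verified answer?"""
    predicted = solver(caption, question)
    return 1.0 if predicted.strip() == ground_truth.strip() else 0.0

def mean_reward(captions, question, ground_truth, solver) -> float:
    """Average reward over candidate captions for one problem.

    A policy-gradient step (e.g. GRPO/PPO) would then push the caption
    model toward the higher-reward candidates.
    """
    rewards = [verifiable_reward(c, question, ground_truth, solver)
               for c in captions]
    return sum(rewards) / len(rewards)
```

In this setup the caption model is optimized end-to-end for downstream solvability, which is what lets the pipeline escape rigid template dependence.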
📝 Abstract
Multimodal large language models have many practical applications that demand strong reasoning abilities. Despite recent advancements, these models still struggle to solve complex geometric problems. A key challenge stems from the lack of high-quality image-text pair datasets for understanding geometric images. Furthermore, most template-based data synthesis pipelines fail to generalize to questions beyond their predefined templates. In this paper, we bridge this gap by introducing a complementary process of Reinforcement Learning with Verifiable Rewards (RLVR) into the data generation pipeline. By adopting RLVR to refine captions for geometric images synthesized from 50 basic geometric relations, and by using reward signals derived from mathematical problem-solving tasks, our pipeline successfully captures the key features of geometry problem solving. This enables better task generalization and yields non-trivial improvements. Furthermore, even in out-of-distribution scenarios, the generated dataset enhances the general reasoning capabilities of multimodal large language models, yielding accuracy improvements of 2.8%-4.8% on statistics, arithmetic, algebraic, and numerical tasks with non-geometric input images in MathVista and MathVerse, along with improvements of 2.4%-3.9% on Art, Design, Tech, and Engineering tasks in MMMU.