VisualSphinx: Large-Scale Synthetic Vision Logic Puzzles for RL

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current vision-language models (VLMs) show limited logical reasoning ability, particularly in diagram understanding and spatial problem solving, largely because large-scale, well-structured, logically explicit multimodal training data is scarce. To address this, we introduce VisualSphinx, the first large-scale synthetic visual logical reasoning dataset explicitly designed to strengthen VLMs' logical reasoning. We propose a "rule-to-image" synthesis pipeline that extracts and expands puzzle rules from seed questions and generates code for grounded image synthesis, ensuring precise alignment between each formal rule and its visual representation. VLMs are then trained on the resulting puzzles with GRPO-based reinforcement learning. Experiments demonstrate substantial gains on established logical reasoning benchmarks, and the improved capabilities also transfer to other reasoning tasks such as algebraic, arithmetic, and geometric reasoning.
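The summarized rule-to-image idea can be sketched in a few lines. The sketch below is hypothetical and much simpler than the paper's pipeline (which uses extracted and expanded rules plus generated rendering code); here a single hand-written rule (repeated 90° rotation) deterministically produces a panel sequence, a grounded correct answer, and random distractors. The function names (`rotate`, `make_puzzle`) are illustrative, not from the paper.

```python
# Minimal, hypothetical sketch of a rule-to-image puzzle synthesizer:
# a formal rule generates the panels, so the correct answer is
# grounded by construction rather than labeled after the fact.
import random

def rotate(grid):
    """Rotate a square binary grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def make_puzzle(seed_grid, n_panels=3, n_choices=4, rng=None):
    """Apply the rule repeatedly to build a panel sequence; the next
    panel is the grounded answer, mixed with random distractors."""
    rng = rng or random.Random(0)
    panels, g = [], seed_grid
    for _ in range(n_panels):
        panels.append(g)
        g = rotate(g)
    answer = g  # fully determined by the rule
    choices = [answer]
    while len(choices) < n_choices:
        d = [[rng.randint(0, 1) for _ in row] for row in answer]
        if d not in choices:
            choices.append(d)
    rng.shuffle(choices)
    return {"panels": panels, "choices": choices,
            "label": choices.index(answer)}

seed = [[1, 0], [0, 0]]
puzzle = make_puzzle(seed)
```

Because the answer is derived from the rule rather than annotated, every sample comes with a verifiable ground truth, which is what makes such data usable as a reward signal for RL.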

📝 Abstract
Vision language models (VLMs) are expected to perform effective multimodal reasoning and make logically coherent decisions, which is critical to tasks such as diagram understanding and spatial problem solving. However, current VLM reasoning lacks large-scale, well-structured training datasets. To bridge this gap, we propose VisualSphinx, a first-of-its-kind large-scale synthetic visual logical reasoning training dataset. To tackle the challenge of synthesizing images with grounded answers, we propose a rule-to-image synthesis pipeline, which extracts and expands puzzle rules from seed questions and generates code for grounded image synthesis and puzzle sample assembly. Experiments demonstrate that VLMs trained with GRPO on VisualSphinx benefit from the logical coherence and readability of our dataset and exhibit improved performance on logical reasoning tasks. The enhanced reasoning capabilities developed from VisualSphinx also benefit other reasoning tasks such as algebraic reasoning, arithmetic reasoning, and geometric reasoning.
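The abstract's GRPO training step relies on a group-relative advantage rather than a learned value function: for each puzzle, several answers are sampled and each reward is normalized against the group's mean and standard deviation. This is a generic sketch of that standard GRPO computation, not code from the paper; the reward values shown are illustrative.

```python
# Hedged sketch of GRPO's group-relative advantage: rewards for a
# group of sampled answers to the same puzzle are z-normalized
# within the group, so no separate critic network is needed.
def grpo_advantages(rewards, eps=1e-6):
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# e.g. one of four sampled answers solves the puzzle (binary reward)
adv = grpo_advantages([1.0, 0.0, 0.0, 0.0])
```

Grounded puzzles like VisualSphinx's pair naturally with this scheme: correctness is checkable, so the binary reward above is available without human labeling.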
Problem

Research questions and friction points this paper is trying to address.

Lack of large-scale structured training data for VLM reasoning
Challenges in image synthesis with grounded logical answers
Need for improved logical coherence in vision-language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale synthetic visual logical reasoning dataset
Rule-to-image synthesis pipeline for puzzles
Enhanced VLM performance on reasoning tasks