Vision Matters: Simple Visual Perturbations Can Boost Multimodal Math Reasoning

📅 2025-06-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current multimodal large language models (MLLMs) suffer from a decoupling between visual representation and reasoning, leading to insufficient integration of image information in mathematical reasoning. To address this, we propose a lightweight, plug-and-play visual perturbation framework that requires no algorithmic modifications or additional training data. Our method introduces three annotation-free perturbation strategies (distractor concatenation, dominance-preserving mixup, and random rotation) designed to enhance perceptual robustness and reasoning consistency over visual content. The framework is fully compatible with open-source MLLMs (e.g., Qwen2.5-VL-7B) and integrates seamlessly into standard post-training paradigms such as SFT, DPO, and GRPO, without architectural changes. Extensive experiments demonstrate consistent gains across multiple mathematical reasoning benchmarks, comparable to those achieved through algorithm-level improvements. Our results provide empirical evidence that visual representation quality constrains the upper bound of multimodal mathematical reasoning.
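The three strategies are simple image-level operations, which is what makes the framework plug-and-play. Below is a minimal sketch of how they might look in Python with Pillow; the function names, the source of the distractor image, and the mixing and rotation ranges are illustrative assumptions rather than the authors' released implementation (see the repository linked in the abstract for the official code).

```python
import random

from PIL import Image


def concat_distractor(image: Image.Image, distractor: Image.Image) -> Image.Image:
    """Distractor concatenation: place an unrelated image beside the original."""
    distractor = distractor.convert("RGB").resize(image.size)
    canvas = Image.new("RGB", (image.width * 2, image.height), "white")
    canvas.paste(image, (0, 0))
    canvas.paste(distractor, (image.width, 0))
    return canvas


def dominance_preserving_mixup(image: Image.Image, distractor: Image.Image,
                               alpha_range=(0.1, 0.3)) -> Image.Image:
    """Dominance-preserving mixup: blend in a distractor with a small weight so
    the original content stays visually dominant (the range is an assumption)."""
    distractor = distractor.convert(image.mode).resize(image.size)
    alpha = random.uniform(*alpha_range)  # weight assigned to the distractor
    return Image.blend(image, distractor, alpha)


def random_rotation(image: Image.Image, max_degrees: float = 30.0) -> Image.Image:
    """Random rotation: rotate by a random angle, padding corners with white."""
    angle = random.uniform(-max_degrees, max_degrees)
    return image.rotate(angle, expand=True, fillcolor=(255, 255, 255))


def perturb(image: Image.Image, distractor: Image.Image) -> Image.Image:
    """Apply one randomly chosen perturbation; no labels or annotations needed."""
    image = image.convert("RGB")
    op = random.choice([
        lambda im: concat_distractor(im, distractor),
        lambda im: dominance_preserving_mixup(im, distractor),
        random_rotation,
    ])
    return op(image)
```

In this sketch, the small mixup weight is what keeps the perturbation dominance-preserving: the original diagram remains clearly legible while the distractor adds only mild visual interference.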

📝 Abstract
Despite the rapid progress of multimodal large language models (MLLMs), they have largely overlooked the importance of visual processing. In a simple yet revealing experiment, we find that language-only models, when provided with image captions, can achieve comparable or even better performance than MLLMs that consume raw visual inputs. This suggests that current MLLMs may generate accurate visual descriptions but fail to effectively integrate them during reasoning. Motivated by this, we propose a simple visual perturbation framework that enhances perceptual robustness without requiring algorithmic modifications or additional training data. Our approach introduces three targeted perturbations (distractor concatenation, dominance-preserving mixup, and random rotation) that can be easily integrated into existing post-training pipelines, including SFT, DPO, and GRPO. Through extensive experiments across multiple datasets, we demonstrate consistent improvements in mathematical reasoning performance, with gains comparable to those achieved through algorithmic changes. Additionally, we achieve competitive performance among open-source 7B RL-tuned models by training Qwen2.5-VL-7B with visual perturbation. Through comprehensive ablation studies, we analyze the effectiveness of different perturbation strategies, revealing that each perturbation type contributes uniquely to different aspects of visual reasoning. Our findings highlight the critical role of visual perturbation in multimodal mathematical reasoning: better reasoning begins with better seeing. Our code is available at https://github.com/YutingLi0606/Vision-Matters.
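As a rough illustration of the "no algorithmic modifications" claim, the perturbations can be applied purely at the data level, for example by wrapping an existing image-question-answer dataset before it feeds an SFT, DPO, or GRPO trainer. The class, module, and field names below are hypothetical and only sketch the idea; they are not the paper's actual pipeline.

```python
import random

from PIL import Image
from torch.utils.data import Dataset

# Hypothetical: `perturb` is the helper from the sketch above, saved as a module.
from visual_perturbations import perturb


class PerturbedMathVQADataset(Dataset):
    """Wraps an existing (image, question, answer) dataset and perturbs the
    image with some probability; the text and the trainer are left untouched."""

    def __init__(self, base_dataset, distractor_paths, perturb_prob=0.5):
        self.base = base_dataset
        self.distractor_paths = list(distractor_paths)
        self.perturb_prob = perturb_prob

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        sample = self.base[idx]  # assumed keys: "image", "question", "answer"
        image = sample["image"]
        if random.random() < self.perturb_prob:
            distractor = Image.open(
                random.choice(self.distractor_paths)).convert("RGB")
            image = perturb(image, distractor)
        return {"image": image,
                "question": sample["question"],
                "answer": sample["answer"]}
```

Because only the input images are modified, the same wrapper can sit in front of any of the three post-training paradigms without touching the loss or the optimizer, which is the sense in which the framework is plug-and-play.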
Problem

Research questions and friction points this paper is trying to address.

MLLMs largely overlook the importance of visual processing during reasoning
Current MLLMs can produce accurate visual descriptions yet fail to integrate them into their reasoning
Can simple visual perturbations boost multimodal mathematical reasoning performance?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simple visual perturbation framework enhances perceptual robustness
Three targeted perturbations improve visual reasoning
No algorithm changes or extra data needed
Yuting Li
School of Computer Science, Shanghai Jiao Tong University
Lai Wei
School of Computer Science, Shanghai Jiao Tong University; Zhongguancun Academy
Kaipeng Zheng
School of Computer Science, Shanghai Jiao Tong University; Shanghai Innovation Institute
Jingyuan Huang
Rutgers University
Linghe Kong
Shanghai Jiao Tong University
Lichao Sun
Lehigh University
Weiran Huang
School of Computer Science, Shanghai Jiao Tong University; Shanghai Innovation Institute; State Key Laboratory of General Artificial Intelligence, BIGAI