🤖 AI Summary
This work identifies insufficient perceptual capability, not reasoning, as the primary bottleneck of multimodal large language models (MLLMs) in STEM visual reasoning. To address this, we propose executable code as a perceptual intermediary, introducing ICC-1M, the first image-caption-code triplet dataset, together with code-grounded caption generation and STEM image-to-code translation methods tailored to STEM visuals that systematically enhance perception. We further establish STEM2Code-Eval, the first perception evaluation benchmark based on code generation. Experiments demonstrate that our approach significantly improves MLLMs' perceptual performance on STEM tasks, validating that strengthening perception is more effective than merely scaling reasoning, and opening a new pathway for applying MLLMs in STEM domains.
📝 Abstract
When MLLMs fail at Science, Technology, Engineering, and Mathematics (STEM) visual reasoning, a fundamental question arises: is it due to perceptual deficiencies or reasoning limitations? Through systematic scaling analysis that independently scales the perception and reasoning components, we uncover a critical insight: scaling perception consistently outperforms scaling reasoning. This reveals perception as the true bottleneck limiting current STEM visual reasoning. Motivated by this insight, our work focuses on systematically enhancing the perception capabilities of MLLMs by establishing code as a powerful perceptual medium: executable code provides precise semantics that naturally align with the structured nature of STEM visuals. Specifically, we construct ICC-1M, a large-scale dataset comprising 1M Image-Caption-Code triplets that materializes this code-as-perception paradigm through two complementary approaches: (1) Code-Grounded Caption Generation treats executable code as ground truth for image captions, eliminating the hallucinations inherent in existing knowledge distillation methods; (2) STEM Image-to-Code Translation prompts models to generate reconstruction code, mitigating the ambiguity of natural language in perception enhancement. To validate this paradigm, we further introduce STEM2Code-Eval, a novel benchmark that directly evaluates visual perception in STEM domains. Unlike existing work that relies on problem-solving accuracy, a proxy that measures only problem-relevant understanding, our benchmark requires comprehensive visual comprehension through executable code generation for image reconstruction, providing deterministic and verifiable assessment. Code is available at https://github.com/TongkunGuan/Qwen-CodePercept.
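The "deterministic and verifiable assessment" idea can be made concrete with a minimal sketch: execute the model's reconstruction code, rasterize the result, and score it against the reference render. The function names, the use of matplotlib, and the MSE-based score below are illustrative assumptions, not the benchmark's actual protocol.

```python
# Hedged sketch of code-based perception scoring in the spirit of
# STEM2Code-Eval; helper names and the similarity metric are hypothetical.
import io

import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so rendering is deterministic
import matplotlib.pyplot as plt


def render(code: str) -> np.ndarray:
    """Execute plotting code in a fresh namespace and rasterize the figure."""
    plt.close("all")
    exec(code, {"plt": plt, "np": np})
    buf = io.BytesIO()
    plt.gcf().savefig(buf, format="png", dpi=72)
    plt.close("all")
    buf.seek(0)
    return plt.imread(buf)  # RGBA array with values in [0, 1]


def pixel_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """1.0 for identical renders; lower as the two images diverge."""
    return 1.0 - float(np.mean((a - b) ** 2))


# Reference figure code vs. a model's reconstruction attempt.
reference = "plt.plot([0, 1, 2], [0, 1, 4]); plt.title('y = x^2')"
candidate = "plt.plot([0, 1, 2], [0, 1, 4]); plt.title('y = x^2')"

score = pixel_similarity(render(reference), render(candidate))
```

Because both sides are executed rather than judged in natural language, the score is reproducible: identical reconstruction code yields a score of exactly 1.0, and any perceptual omission in the generated code shows up as a measurable pixel-level discrepancy.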