🤖 AI Summary
This work investigates whether System-2-style long chain-of-thought (CoT) reasoning can outperform System-1 intuitive inference on perceptual tasks. To this end, the authors construct LongPerceptualThoughts, a synthetic visual-reasoning dataset of 30K verifiable long-CoT traces. The dataset is built with a three-stage controllable synthesis framework: (1) generating verifiable multiple-choice questions grounded in dense image descriptions; (2) extracting simple CoTs for those questions from vision-language models (VLMs); and (3) expanding those simple CoTs into long, logically elaborate reasoning traces with frontier reasoning models. A strong instruction-tuned 7B model fine-tuned on this dataset gains an average of +3.4 points across five vision-centric benchmarks, including +11.8 points on V* Bench. Notably, despite being tuned for vision tasks, it also improves by +2.0 points on the text-only MMLU-Pro benchmark, indicating cross-modal transfer benefits of long-chain reasoning.
📝 Abstract
Recent reasoning models built on test-time scaling have demonstrated that long chains of thought can unlock substantial performance gains on hard reasoning tasks such as math and code. However, the benefit of such long, System-2-style thinking is less explored in other domains, such as perceptual tasks, where shallower System-1 reasoning seems sufficient. In this paper, we introduce LongPerceptualThoughts, a new synthetic dataset of 30K long-thought traces for perceptual tasks. Synthesizing elaborate reasoning traces for perceptual tasks is challenging for two reasons: off-the-shelf models are not yet equipped with such thinking behavior, and it is not straightforward to build a reliable process verifier for perceptual tasks. We therefore propose a novel three-stage data synthesis framework that first synthesizes verifiable multiple-choice questions from dense image descriptions, then extracts simple CoTs from VLMs for those verifiable problems, and finally expands those simple thoughts into elaborate long thoughts via frontier reasoning models. In controlled experiments with a strong instruction-tuned 7B model, we demonstrate notable improvements over existing visual-reasoning data-generation methods. Our model, trained on the generated dataset, achieves an average improvement of +3.4 points across 5 vision-centric benchmarks, including +11.8 points on V$^*$ Bench. Notably, despite being tuned for vision tasks, it also improves performance on the text reasoning benchmark MMLU-Pro by +2 points.
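The three-stage synthesis framework described above can be sketched as a simple pipeline. This is a minimal illustration, not the authors' implementation: every model call is replaced by a stand-in stub, and all function names, prompts, and example values are assumptions. The key structural idea it shows is that the answer is known by construction from stage 1, so stage-2 CoTs can be verified at the outcome level without a process verifier.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    question: str
    options: list
    answer: str          # ground-truth letter, known by construction from stage 1
    simple_cot: str = ""
    long_cot: str = ""

def stage1_generate_mcq(dense_caption: str) -> Sample:
    """Stage 1: turn a dense image description into a verifiable
    multiple-choice question (hypothetical stub; a real pipeline
    would prompt an LLM with the caption)."""
    return Sample(
        question="What color is the mug on the desk?",
        options=["A. red", "B. blue", "C. green", "D. white"],
        answer="B",
    )

def stage2_extract_simple_cot(sample: Sample) -> Sample:
    """Stage 2: ask a VLM for a short chain of thought; keep it only
    if its final answer matches the known ground truth (stubbed)."""
    vlm_cot, vlm_answer = "The mug next to the keyboard looks blue.", "B"
    if vlm_answer == sample.answer:   # outcome-level verification
        sample.simple_cot = vlm_cot
    return sample

def stage3_expand_long_cot(sample: Sample) -> Sample:
    """Stage 3: have a frontier reasoning model expand the simple CoT
    into a long trace with re-checking behavior (stubbed)."""
    sample.long_cot = (
        sample.simple_cot
        + " Wait, let me verify: the caption mentions only one mug,"
          " and its color is blue, so B is consistent."
    )
    return sample

def synthesize(dense_caption: str) -> Sample:
    return stage3_expand_long_cot(
        stage2_extract_simple_cot(stage1_generate_mcq(dense_caption)))

trace = synthesize("A desk with a keyboard and a blue mug next to it.")
print(trace.answer, bool(trace.simple_cot), bool(trace.long_cot))
```

In this sketch, samples whose stage-2 answer disagrees with the ground truth simply keep an empty `simple_cot` and would be filtered out; the real framework's filtering and prompting details are not reproduced here.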