🤖 AI Summary
Current vision-language models (VLMs) suffer from two interrelated bottlenecks in multimodal reasoning: inaccurate perception and brittle inference. Mainstream enhancement approaches rely on costly human annotations, proprietary models, or perception-agnostic self-training, limiting generalizability and practicality. To address this, we propose See-Think-Learn, a novel self-training framework that enables *co-evolution* of perception and reasoning for the first time. It leverages visual attribute extraction to guide structured chain-of-thought generation, integrates negative rationale mining, and employs template-based reasoning distillation, all without any human supervision. Experiments across diverse multimodal reasoning benchmarks demonstrate substantial improvements over strong baselines, achieving superior discriminability, robustness, and interpretability. Our approach establishes a new paradigm for low-cost, scalable multimodal reasoning.
📝 Abstract
Vision-Language Models (VLMs) have achieved remarkable progress in integrating visual perception with language understanding. However, effective multimodal reasoning requires both accurate perception and robust reasoning, and weakness in either limits the performance of VLMs. Prior efforts to enhance reasoning often depend on high-quality chain-of-thought (CoT) data, obtained via labor-intensive human annotations, costly proprietary models, or self-training methods that overlook perception. To address these limitations, we propose a simple yet effective self-training framework called See-Think-Learn (STL). At its core, STL introduces a structured reasoning template that encourages the model to see before thinking: first extracting visual attributes in textual form, then using them to guide reasoning. The framework jointly improves perception and reasoning by having the model generate and learn from its own structured rationales in a self-training loop. Furthermore, we augment the training data with negative rationales, i.e., explanations that justify why certain answer choices are incorrect, to enhance the model's ability to distinguish between correct and misleading responses. This fosters more discriminative and robust learning. Experiments across diverse domains show that STL consistently outperforms baselines trained only on answers or on self-generated reasoning, while qualitative analysis confirms the high quality of its rationales. STL thus provides a cost-effective solution to enhance the multimodal reasoning ability of VLMs.
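To make the see-before-thinking loop concrete, here is a minimal sketch of one STL-style self-training round. All names (`extract_attributes`, `reason`, `refute`, the template wording, the `ToyVLM` stand-in) are illustrative assumptions, not the paper's actual implementation; the real method fine-tunes a VLM on the collected rationales between rounds.

```python
# Hypothetical sketch of one See-Think-Learn (STL) self-training round.
# Function names, the template, and ToyVLM are illustrative assumptions.

STL_TEMPLATE = (
    "See: {attributes}\n"   # visual attributes extracted in textual form
    "Think: {reasoning}\n"  # reasoning guided by those attributes
    "Answer: {answer}"
)

def build_rationale(attributes, reasoning, answer):
    """Fill the structured 'see before thinking' template."""
    return STL_TEMPLATE.format(
        attributes=attributes, reasoning=reasoning, answer=answer
    )

def self_training_round(model, dataset):
    """Generate structured rationales, keep those whose answer matches the
    label, and mine negative rationales for the remaining answer choices."""
    training_data = []
    for sample in dataset:
        attrs = model.extract_attributes(sample["image"])            # See
        reasoning, answer = model.reason(sample["question"], attrs)  # Think
        if answer == sample["label"]:  # keep only self-consistent rationales
            training_data.append(
                {"input": sample,
                 "target": build_rationale(attrs, reasoning, answer)}
            )
            # Negative rationale mining: justify why each distractor is wrong.
            for wrong in sample["choices"]:
                if wrong != sample["label"]:
                    neg = model.refute(sample["question"], attrs, wrong)
                    training_data.append(
                        {"input": sample,
                         "target": f"{wrong} is incorrect: {neg}"}
                    )
    return training_data  # fine-tune on this data (Learn), then repeat

class ToyVLM:
    """Stand-in for a real VLM so the loop runs end to end (toy only)."""
    def extract_attributes(self, image):
        return "color=red, shape=cube"
    def reason(self, question, attrs):
        return "the attributes describe a red cube", "A"
    def refute(self, question, attrs, wrong):
        return f"the extracted attributes do not support choice {wrong}"
```

Filtering by answer correctness is the usual self-training safeguard here: only rationales that lead to the right answer are reused as supervision, while the mined negative rationales give the model contrastive signal about why the distractors fail.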