See, Think, Learn: A Self-Taught Multimodal Reasoner

📅 2025-12-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current vision-language models (VLMs) face two interrelated bottlenecks in multimodal reasoning: inaccurate perception and brittle inference. Mainstream enhancement approaches rely on costly human annotations, proprietary models, or perception-agnostic self-training, which limits their generalizability and practicality. To address this, the authors propose See-Think-Learn, a self-training framework that enables the co-evolution of perception and reasoning. It leverages visual attribute extraction to guide structured chain-of-thought generation, integrates negative rationale mining, and employs template-based reasoning distillation, all without human supervision. Experiments across diverse multimodal reasoning benchmarks demonstrate substantial improvements over strong baselines, with gains in discriminability, robustness, and interpretability. The approach offers a low-cost, scalable route to stronger multimodal reasoning.

📝 Abstract
Vision-Language Models (VLMs) have achieved remarkable progress in integrating visual perception with language understanding. However, effective multimodal reasoning requires both accurate perception and robust reasoning, and weakness in either limits the performance of VLMs. Prior efforts to enhance reasoning often depend on high-quality chain-of-thought (CoT) data, obtained via labor-intensive human annotations, costly proprietary models, or self-training methods that overlook perception. To address these limitations, we propose a simple yet effective self-training framework called See-Think-Learn (STL). At its core, STL introduces a structured reasoning template that encourages the model to see before thinking: first extracting visual attributes in textual form, then using them to guide reasoning. The framework jointly improves perception and reasoning by having the model generate and learn from its own structured rationales in a self-training loop. Furthermore, we augment the training data with negative rationales, i.e., explanations that justify why certain answer choices are incorrect, to enhance the model's ability to distinguish between correct and misleading responses. This fosters more discriminative and robust learning. Experiments across diverse domains show that STL consistently outperforms baselines trained only on answers or on unstructured self-generated reasoning, while qualitative analysis confirms the high quality of its rationales. STL thus provides a cost-effective way to enhance the multimodal reasoning ability of VLMs.
Problem

Research questions and friction points this paper is trying to address.

Enhances multimodal reasoning without costly human annotations
Improves both perception and reasoning through self-generated rationales
Uses negative rationales to distinguish correct from misleading answers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-training framework with structured reasoning template
Generates and learns from its own structured rationales
Augments training with negative rationales for discriminative learning
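The self-training loop described above can be sketched in a few lines of Python. This is an illustrative sketch only: the function names, the template wording, and the `vlm_generate` stub are assumptions for illustration, not the paper's actual implementation, and a real system would call a fine-tunable VLM in place of the stub.

```python
# Illustrative sketch of one See-Think-Learn (STL) self-training round.
# `vlm_generate` is a stub standing in for a real VLM call; the template
# wording below is a hypothetical rendering of "see before thinking".

STL_TEMPLATE = (
    "See: list the visual attributes relevant to the question.\n"
    "Think: reason over those attributes step by step.\n"
    "Answer: choose one option."
)

def vlm_generate(image, question, choices, target=None):
    """Stub for a VLM prompted with STL_TEMPLATE. When `target` is given,
    the model is asked to produce a rationale for that specific choice
    (used here for negative rationale mining)."""
    answer = target if target is not None else choices[0]
    return {
        "see": f"attributes extracted from {image}",
        "think": f"reasoning about '{question}' using those attributes",
        "answer": answer,
    }

def build_round(dataset):
    """One self-training round: keep self-generated rationales whose final
    answer matches the ground truth (positives), and mine negative
    rationales explaining why each remaining choice is wrong."""
    positives, negatives = [], []
    for ex in dataset:
        r = vlm_generate(ex["image"], ex["question"], ex["choices"])
        if r["answer"] == ex["label"]:  # answer-consistency filter
            positives.append((ex, r))
        for wrong in ex["choices"]:
            if wrong != ex["label"]:
                neg = vlm_generate(ex["image"], ex["question"],
                                   ex["choices"], target=wrong)
                neg["think"] = (
                    f"why '{wrong}' is inconsistent with the attributes"
                )
                negatives.append((ex, neg))
    # In STL, positives + negatives would then be used to fine-tune the
    # VLM before the next round of generation.
    return positives, negatives
```

The key design point this sketch captures is that perception (the "see" field) and reasoning (the "think" field) are trained jointly from the same self-generated rationales, while the answer-consistency filter and mined negatives keep the training signal discriminative.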
Sourabh Sharma
Malaviya National Institute of Technology Jaipur
Sonam Gupta
Research Scientist, IBM Research
Computer Vision, Video Generation, Video Understanding, Natural Language Processing
Sadbhawna
Malaviya National Institute of Technology Jaipur