Self-Correction is More than Refinement: A Learning Framework for Visual and Language Reasoning Tasks

📅 2024-10-05
🏛️ arXiv.org
📈 Citations: 3
Influential: 2
🤖 AI Summary
Vision-language models (VLMs) frequently produce erroneous responses in multimodal reasoning, and existing work has not systematically investigated their self-correction capability. Method: This paper frames self-correction as a learnable, evolving reasoning process rather than a one-time output refinement, and proposes the Self-Correction Learning (SCL) framework. During inference, SCL collects preferred and disfavored response pairs via two-turn self-correction; during fine-tuning, it applies Direct Preference Optimization (DPO) without external feedback to improve self-correction ability end to end. Contribution/Results: Experiments demonstrate that SCL significantly boosts VLM accuracy on multimodal reasoning benchmarks, enabling direct generation of high-quality responses and substantially reducing reliance on post-hoc correction. The results indicate that preference optimization plays a pivotal role in strengthening the intrinsic reasoning capabilities of VLMs.

📝 Abstract
While Vision-Language Models (VLMs) have shown remarkable abilities in visual and language reasoning tasks, they invariably generate flawed responses. Self-correction that instructs models to refine their outputs presents a promising solution to this issue. Previous studies have mainly concentrated on Large Language Models (LLMs), while the self-correction abilities of VLMs, particularly concerning both visual and linguistic information, remain largely unexamined. This study investigates the self-correction capabilities of VLMs during both inference and fine-tuning stages. We introduce a Self-Correction Learning (SCL) approach that enables VLMs to learn from their self-generated self-correction data through Direct Preference Optimization (DPO) without relying on external feedback, facilitating self-improvement. Specifically, we collect preferred and disfavored samples based on the correctness of initial and refined responses, which are obtained by two-turn self-correction with VLMs during the inference stage. Experimental results demonstrate that although VLMs struggle to self-correct effectively during iterative inference without additional fine-tuning and external feedback, they can enhance their performance and avoid previous mistakes through preference fine-tuning when their self-generated self-correction data are categorized into preferred and disfavored samples. This study emphasizes that self-correction is not merely a refinement process; rather, it should enhance the reasoning abilities of models through additional training, enabling them to generate high-quality responses directly without further refinement.
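The data-collection step described in the abstract (two-turn self-correction, then sorting responses into preferred and disfavored samples by correctness) can be sketched as follows. This is a minimal illustration, not the paper's implementation; `ask_vlm` and `is_correct` are hypothetical stand-ins for a VLM call and an answer checker.

```python
# Sketch: build DPO preference pairs from two-turn self-correction.
# `ask_vlm(question, context=..., instruction=...)` and
# `is_correct(question, answer)` are assumed interfaces, not real APIs.

REFINE_PROMPT = "Review your previous answer and correct it if needed."

def collect_preference_pairs(questions, ask_vlm, is_correct):
    pairs = []
    for q in questions:
        initial = ask_vlm(q)                       # turn 1: initial response
        refined = ask_vlm(q, context=initial,      # turn 2: self-correction
                          instruction=REFINE_PROMPT)
        # Keep only pairs where exactly one response is correct,
        # so "preferred" vs "disfavored" is well defined.
        if is_correct(q, initial) and not is_correct(q, refined):
            pairs.append({"prompt": q, "chosen": initial, "rejected": refined})
        elif is_correct(q, refined) and not is_correct(q, initial):
            pairs.append({"prompt": q, "chosen": refined, "rejected": initial})
    return pairs
```

Note that pairs where both turns are correct (or both wrong) carry no preference signal and are discarded in this sketch.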
Problem

Research questions and friction points this paper is trying to address.

Whether VLMs can correct their own flawed responses in visual and language reasoning remains largely unexamined
Iterative self-correction at inference time, without additional fine-tuning or external feedback, fails to improve VLM outputs
How to turn self-generated self-correction data into a training signal that improves reasoning without external feedback
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Correction Learning framework for VLMs
Direct Preference Optimization without external feedback
Fine-tuning with self-generated correction data
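The preference fine-tuning objective named above, DPO, reduces per pair to a single scalar loss. A minimal sketch for one preference pair, assuming sequence log-probabilities from the policy and a frozen reference model are already computed (the variable names are illustrative, not from the paper):

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one pair: -log sigmoid(beta * margin), where the
    margin compares policy-vs-reference log-ratios of chosen/rejected."""
    margin = ((logp_chosen - ref_logp_chosen)
              - (logp_rejected - ref_logp_rejected))
    # -log sigmoid(x) written out explicitly for clarity
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

When the policy matches the reference model, the margin is zero and the loss is log 2; preferring the chosen response drives the loss below that baseline.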