Towards Self-Refinement of Vision-Language Models with Triangular Consistency

📅 2025-10-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the intrinsic self-refinement capability of vision-language models (VLMs) under unsupervised instruction data. To overcome the limitations of existing approaches—namely, their reliance on human annotations or external feedback—we propose a self-refinement framework grounded in triangular consistency: given an image-query-answer triplet, the model reconstructs each component from the other two, and low-quality samples are filtered based on reconstruction fidelity. Theoretically, we analyze this mechanism from a causal learning perspective; technically, we integrate multi-task instruction tuning with synthetic data training, enabling end-to-end self-updating within the LLaVA-1.5 architecture. Experiments demonstrate consistent performance gains across multiple benchmarks—without any human annotation or external supervision—providing the first empirical validation of VLMs’ intrinsic self-optimization capability. The implementation is publicly available.

📝 Abstract
Vision-Language Models (VLMs) integrate visual knowledge with the analytical capabilities of Large Language Models (LLMs) through supervised visual instruction tuning, using image-question-answer triplets. However, the potential of VLMs trained without supervised instruction remains largely unexplored. This study validates that VLMs possess inherent self-refinement capabilities, enabling them to generate high-quality supervised data without external inputs and thereby learn autonomously. Specifically, to stimulate the self-refinement ability of VLMs, we propose a self-refinement framework based on a Triangular Consistency principle: within the image-query-answer triangle, any masked elements should be consistently and accurately reconstructed. The framework involves three steps: (1) We enable the instruction generation ability of VLMs by adding multi-task instruction tuning like image$\rightarrow$question-answer or image-answer$\rightarrow$question. (2) We generate image-query-answer triplets from unlabeled images and use the Triangular Consistency principle for filtering. (3) The model is further updated using the filtered synthetic data. To investigate the underlying mechanisms behind this self-refinement capability, we conduct a theoretical analysis from a causal perspective. Using the widely recognized LLaVA-1.5 as our baseline, our experiments reveal that the model can autonomously achieve consistent, though deliberately modest, improvements across multiple benchmarks without any external supervision, such as human annotations or environmental feedback. We expect that the insights of this study on the self-refinement ability of VLMs can inspire future research on the learning mechanism of VLMs. Code is available at https://github.com/dengyl20/SRF-LLaVA-1.5.
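The three-step loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the model methods (`generate_triplet`, `reconstruct`, `finetune`), the reconstruction-fidelity score in [0, 1], and the filtering threshold are all hypothetical stand-ins.

```python
def consistency_score(triplet, reconstruct):
    """Triangular Consistency: average fidelity of reconstructing each
    masked element of (image, query, answer) from the other two.
    `reconstruct(masked_triplet, target)` is an assumed scoring callable
    returning a fidelity in [0, 1]."""
    image, query, answer = triplet
    scores = [
        reconstruct(("<mask>", query, answer), image),   # recover the image side
        reconstruct((image, "<mask>", answer), query),   # recover the question
        reconstruct((image, query, "<mask>"), answer),   # recover the answer
    ]
    return sum(scores) / len(scores)

def self_refine(model, unlabeled_images, threshold=0.5):
    """One round of self-refinement over unlabeled images (steps 2 and 3)."""
    # Step 2: generate candidate triplets, then filter by Triangular Consistency.
    candidates = [model.generate_triplet(img) for img in unlabeled_images]
    kept = [t for t in candidates
            if consistency_score(t, model.reconstruct) >= threshold]
    # Step 3: update the model on the filtered synthetic data.
    model.finetune(kept)
    return kept
```

Low-quality triplets (e.g. hallucinated answers) should fail at least one reconstruction direction, which drags their average score below the threshold.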
Problem

Research questions and friction points this paper is trying to address.

Developing a self-refinement framework for vision-language models
Enabling autonomous learning without external supervision
Generating high-quality training data via the Triangular Consistency principle
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-refinement framework based on the Triangular Consistency principle
Generating image-query-answer triplets from unlabeled images
Filtering synthetic data via masked-element reconstruction
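Step (1) of the framework adds multi-task instruction tuning so the model can generate, not just answer, instructions. A hedged sketch of how such training samples might be formatted follows; the prompt templates and the `build_instruction` helper are invented for illustration and may differ from the paper's actual templates.

```python
def build_instruction(task, image_token="<image>", question=None, answer=None):
    """Format one training sample for a given generation direction.
    Returns a dict with the model input ("prompt") and target output."""
    if task == "image->qa":
        # Generate both a question about the image and its answer.
        return {"prompt": f"{image_token}\nWrite a question about the image and answer it.",
                "target": f"Q: {question}\nA: {answer}"}
    if task == "image,answer->question":
        # Recover the question given the image and the answer.
        return {"prompt": f"{image_token}\nAnswer: {answer}\nWhat question does this answer?",
                "target": question}
    if task == "image,question->answer":
        # The standard VQA direction used in ordinary instruction tuning.
        return {"prompt": f"{image_token}\n{question}",
                "target": answer}
    raise ValueError(f"unknown task: {task}")
```

Training on all three directions is what later lets a single model both propose triplets and score their reconstructions for the consistency filter.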
👥 Authors
Yunlong Deng
Mohamed bin Zayed University of Artificial Intelligence
Guangyi Chen
Mohamed bin Zayed University of Artificial Intelligence, Carnegie Mellon University
Tianpei Gu
Research Scientist, ByteDance/TikTok
Computer Vision · Generative Model
Lingjing Kong
Carnegie Mellon University
Machine Learning
Yan Li
Mohamed bin Zayed University of Artificial Intelligence
Zeyu Tang
Postdoctoral Scholar, Stanford University
Trustworthy AI · Causality · Computational Justice
Kun Zhang
Mohamed bin Zayed University of Artificial Intelligence, Carnegie Mellon University