🤖 AI Summary
This work investigates the intrinsic self-refinement capability of vision-language models (VLMs) in the absence of supervised instruction data. To overcome the limitations of existing approaches—namely, their reliance on human annotations or external feedback—we propose a self-refinement framework grounded in triangular consistency: given an image-query-answer triplet, the model reconstructs each component from the other two, and low-quality samples are filtered based on reconstruction fidelity. Theoretically, we analyze this mechanism from a causal learning perspective; technically, we integrate multi-task instruction tuning with synthetic-data training, enabling end-to-end self-updating within the LLaVA-1.5 architecture. Experiments demonstrate consistent performance gains across multiple benchmarks—without any human annotation or external supervision—providing the first empirical validation of VLMs' intrinsic self-optimization capability. The implementation is publicly available.
📝 Abstract
Vision-Language Models (VLMs) integrate visual knowledge with the analytical capabilities of Large Language Models (LLMs) through supervised visual instruction tuning on image-question-answer triplets. However, the potential of VLMs trained without supervised instruction remains largely unexplored. This study validates that VLMs possess inherent self-refinement capabilities, enabling them to generate high-quality supervised data without external inputs and thereby learn autonomously. Specifically, to stimulate the self-refinement ability of VLMs, we propose a self-refinement framework based on a Triangular Consistency principle: within the image-query-answer triangle, any masked element should be consistently and accurately reconstructed from the other two. The framework involves three steps: (1) We enable the instruction-generation ability of VLMs by adding multi-task instruction tuning such as image$\rightarrow$question-answer or image-answer$\rightarrow$question. (2) We generate image-query-answer triplets from unlabeled images and use the Triangular Consistency principle for filtering. (3) The model is further updated using the filtered synthetic data. To investigate the underlying mechanisms behind this self-refinement capability, we conduct a theoretical analysis from a causal perspective. Using the widely recognized LLaVA-1.5 as our baseline, our experiments reveal that the model can autonomously achieve consistent, albeit modest, improvements across multiple benchmarks without any external supervision, such as human annotations or environmental feedback. We expect that the insights of this study into the self-refinement ability of VLMs can inspire future research on the learning mechanisms of VLMs. Code is available at https://github.com/dengyl20/SRF-LLaVA-1.5.
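The filtering logic of step (2) can be sketched in a few lines. The snippet below is a toy illustration under assumed names (`triangular_consistency_filter`, `score_fn`, `threshold` are not the paper's actual API): a real scorer would prompt the VLM to reconstruct the masked element of each triplet from the other two and measure reconstruction fidelity.

```python
# Sketch of Triangular Consistency filtering (illustrative names, not the
# paper's implementation). A triplet is kept only if every element can be
# faithfully reconstructed from the remaining two.

def triangular_consistency_filter(triplets, score_fn, threshold=0.8):
    """Keep (image, query, answer) triplets whose three masked-element
    reconstructions all score at least `threshold`."""
    kept = []
    for image, query, answer in triplets:
        scores = (
            score_fn(target="answer", given=(image, query), expected=answer),
            score_fn(target="query", given=(image, answer), expected=query),
            score_fn(target="image", given=(query, answer), expected=image),
        )
        # The triplet survives only if all three sides of the triangle agree.
        if min(scores) >= threshold:
            kept.append((image, query, answer))
    return kept


def stub_score(target, given, expected):
    # Stand-in scorer for illustration: pretend the model reconstructs
    # everything well except samples marked "noisy". A real scorer would
    # query the VLM and compare its reconstruction against `expected`.
    return 0.1 if expected == "noisy" else 0.9


if __name__ == "__main__":
    data = [("img1", "q1", "a1"), ("img2", "q2", "noisy")]
    print(triangular_consistency_filter(data, stub_score))
    # → [('img1', 'q1', 'a1')]
```

The filtered triplets would then feed step (3), the self-update on synthetic data.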