Bridging Writing Manner Gap in Visual Instruction Tuning by Creating LLM-aligned Instructions

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies a significant stylistic mismatch, termed the writing manner gap, between visual instructions and the underlying large language model (LLM) during visual instruction tuning of large multimodal models (LMMs). The gap spans vocabulary, grammar, and sentence structure, and it forces the pre-trained base LLM (e.g., LLaMA, Qwen) to deviate from its original writing style, degrading capability and increasing hallucination. To bridge the gap while preserving semantics, the authors have the base LLM itself rewrite soft-format visual instructions into its own writing manner, producing LLM-aligned instructions. Manual writing manner evaluation shows the gap is substantially reduced, and training on LLM-aligned instructions gives the baselines LLaVA-7B and QwenVL stronger resistance to hallucinations along with non-trivial improvements across 15 visual and language benchmarks.

📝 Abstract
In the realm of Large Multi-modal Models (LMMs), the instruction quality during the visual instruction tuning stage significantly influences the performance of modality alignment. In this paper, we assess the instruction quality from a unique perspective termed "Writing Manner", which encompasses the selection of vocabulary, grammar and sentence structure to convey specific semantics. We argue that there exists a substantial writing manner gap between the visual instructions and the base Large Language Models (LLMs) within LMMs. This gap forces the pre-trained base LLMs to deviate from their original writing styles, leading to capability degradation of both base LLMs and LMMs. To bridge the writing manner gap while preserving the original semantics, we propose directly leveraging the base LLM to align the writing manner of soft-format visual instructions with that of the base LLM itself, resulting in novel LLM-aligned instructions. The manual writing manner evaluation results demonstrate that our approach successfully minimizes the writing manner gap. By utilizing LLM-aligned instructions, the baseline models LLaVA-7B and QwenVL demonstrate enhanced resistance to hallucinations and non-trivial comprehensive improvements across all 15 visual and language benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Addressing writing manner gap in visual instruction tuning
Aligning visual instructions with base LLM writing styles
Improving LMM performance by reducing hallucinations and raising benchmark scores
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aligns visual instructions with LLM writing styles
Reduces writing manner gap in LMMs
Improves model performance across benchmarks
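The core idea above can be sketched in a few lines: the base LLM rewrites each visual instruction in its own wording while the semantics are kept fixed. This is a minimal illustrative sketch, not the paper's implementation; the prompt template, the `base_llm` callable, and the `toy_llm` stand-in are all assumptions for demonstration.

```python
# Hedged sketch of the LLM-aligned rewriting step: `base_llm` is any
# callable mapping a prompt string to a completion string (assumption).
REWRITE_TEMPLATE = (
    "Rewrite the following instruction in your own natural wording, "
    "keeping the meaning unchanged:\n\n{instruction}"
)

def align_writing_manner(instructions, base_llm):
    """Return LLM-aligned versions of the given visual instructions."""
    return [
        base_llm(REWRITE_TEMPLATE.format(instruction=inst))
        for inst in instructions
    ]

# Toy stand-in for a base LLM, purely for demonstration: it "rewrites"
# by normalizing the casing of the original instruction.
def toy_llm(prompt):
    original = prompt.rsplit("\n\n", 1)[-1]
    return original.lower().capitalize()

aligned = align_writing_manner(["DESCRIBE the image BRIEFLY."], toy_llm)
print(aligned[0])  # → Describe the image briefly.
```

In practice the rewriting model would be the same base LLM used inside the LMM (e.g., LLaMA or Qwen), so the rewritten instructions match the style distribution the LLM was pre-trained to produce.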