🤖 AI Summary
This work investigates how hallucinated content, erroneous responses, and low-quality OCR (collectively termed "corruption") affect multimodal large language models (MLLMs) during visual instruction tuning (VIT). The authors find that corruption-induced degradation is largely superficial, primarily affecting output-layer parameters; freezing the lower-layer parameters or fine-tuning with as little as 1% clean data suffices to restore over 95% of the original performance. Building on this insight, they propose the first self-verifying data-cleaning framework that requires no external labels: it identifies corrupted samples via parameter plasticity analysis, then combines self-supervised confidence estimation with corruption-aware lightweight post-training, forming a two-stage corruption-robust training paradigm. The method significantly outperforms existing approaches across multiple VIT benchmarks and enables end-to-end automatic data cleaning.
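The restoration claim above can be illustrated in a toy setting. The sketch below is a minimal stand-in, not the paper's actual architecture or hyperparameters: a two-layer model whose "lower" weights stay frozen while only the output head is updated during lightweight post-training on a small clean set. All names, shapes, and learning rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an MLLM (illustrative only): a frozen lower layer
# followed by an output head assumed to carry the corruption damage.
W_lower = rng.normal(size=(8, 8))        # "lower" layers: kept frozen
W_lower0 = W_lower.copy()                # snapshot to verify freezing
W_head = rng.normal(size=(8, 4))         # output head: the only trainable part

# Small "clean" post-training set (random placeholders for real data).
X = rng.normal(size=(32, 8))
Y = rng.normal(size=(32, 4))

H = np.maximum(X @ W_lower, 0)           # frozen features (ReLU of lower layer)
loss_before = ((H @ W_head - Y) ** 2).mean()

# Lightweight post-training: gradient steps on the output head ONLY,
# mirroring "freeze lower layers, repair the output-layer parameters".
for _ in range(200):
    grad = H.T @ (H @ W_head - Y) / len(X)   # MSE gradient w.r.t. W_head
    W_head -= 0.01 * grad

loss_after = ((H @ W_head - Y) ** 2).mean()
```

After the loop, `loss_after` is well below `loss_before` while `W_lower` is bit-for-bit unchanged, which is the structural point: recovery here requires touching only the head.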
📝 Abstract
Visual Instruction Tuning (VIT) enhances Multimodal Large Language Models (MLLMs) but is hindered by corrupted datasets containing hallucinated content, incorrect responses, and poor OCR quality. Prior works address this through dataset refinement, either by collecting high-quality data or by rule-based filtering, but these approaches are costly or limited to specific types of corruption. To understand how corrupted data affects MLLMs, we systematically investigate this issue and find that although corrupted data degrades the performance of MLLMs, its effects are largely superficial: performance can be substantially restored either by disabling a small subset of parameters or by post-training with a small amount of clean data. Moreover, corrupted MLLMs exhibit an improved ability to distinguish clean samples from corrupted ones, enabling dataset cleaning without external assistance. Based on these insights, we propose a corruption-robust training paradigm combining self-validation and post-training, which significantly outperforms existing corruption mitigation strategies.
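The self-validation idea, a model distinguishing clean from corrupted samples without external labels, can be sketched as follows. This is a hypothetical simplification: the paper's validation signal is reduced here to a single per-sample loss from the (corrupted) model, and the samples are synthetic numbers, not real VIT data.

```python
import numpy as np

def self_validate(per_sample_loss, keep_fraction=0.8):
    """Rank samples by the model's own per-sample loss and keep the
    lowest-loss fraction as 'clean'. Hypothetical stand-in for the
    paper's self-validation step."""
    order = np.argsort(per_sample_loss)              # low loss -> likely clean
    n_keep = int(len(per_sample_loss) * keep_fraction)
    return np.sort(order[:n_keep])                   # indices of retained samples

# Toy demo: corrupted samples receive clearly inflated losses.
rng = np.random.default_rng(0)
losses = np.concatenate([
    rng.normal(1.0, 0.1, size=80),   # clean samples (indices 0..79)
    rng.normal(3.0, 0.2, size=20),   # corrupted samples (indices 80..99)
])
kept = self_validate(losses, keep_fraction=0.8)
```

With this well-separated toy distribution, the retained 80% are exactly the clean indices; the cleaned subset would then feed the post-training stage described above.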