🤖 AI Summary
This work addresses the language bias and language sensitivity commonly exhibited by large vision-language models during inference, both stemming from overreliance on linguistic priors. To mitigate these issues, the authors propose a Self-Critical Inference (SCI) framework that, for the first time, introduces multi-round counterfactual reasoning into vision-language models. SCI dynamically refines model predictions by alternately applying textual and visual perturbations coupled with visual contrastive decoding. The study also constructs DRBench, the first model-customized dynamic robustness benchmark, enabling fine-grained evaluation of model behavior under perturbation. Experiments show that SCI significantly outperforms existing methods on DRBench, and that increasing the number of inference rounds further enhances robustness, surpassing single-step counterfactual reasoning strategies.
📝 Abstract
The emergence of Large Language Models (LLMs) has driven rapid progress in multi-modal learning, particularly in the development of Large Vision-Language Models (LVLMs). However, existing LVLM training paradigms place excessive reliance on the LLM component, giving rise to two critical robustness challenges: language bias and language sensitivity. To address both issues simultaneously, we propose a novel Self-Critical Inference (SCI) framework that extends Visual Contrastive Decoding by conducting multi-round counterfactual reasoning through both textual and visual perturbations. This process also yields a new strategy for improving robustness: scaling the number of counterfactual rounds. Moreover, we observe that failure cases of LVLMs differ significantly across models, indicating that fixed robustness benchmarks may fail to capture the true reliability of LVLMs. To this end, we propose the Dynamic Robustness Benchmark (DRBench), a model-specific evaluation framework targeting both language bias and sensitivity. Extensive experiments show that SCI consistently outperforms baseline methods on DRBench, and that increasing the number of inference rounds further boosts robustness beyond existing single-step counterfactual reasoning methods.
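To make the alternating-perturbation idea concrete, here is a minimal NumPy sketch of how multi-round counterfactual inference over contrastive logits could look. The contrastive step follows the standard Visual Contrastive Decoding form, `(1 + α)·logits(clean) − α·logits(perturbed)`; the `alpha` value, the `toy_model` stand-in, and the round-alternation schedule are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def contrastive_logits(logits_clean, logits_perturbed, alpha=1.0):
    """Standard contrastive-decoding combination: amplify tokens the
    clean input supports and the perturbed input does not."""
    return (1 + alpha) * logits_clean - alpha * logits_perturbed

def multi_round_sci(model, image, text, perturb_image, perturb_text,
                    rounds=2, alpha=1.0):
    """Hypothetical multi-round loop: alternate visual and textual
    perturbations, refining the running logits each round.
    `model(image, text)` is assumed to return next-token logits."""
    logits = model(image, text)
    for r in range(rounds):
        if r % 2 == 0:  # visual counterfactual round
            perturbed = model(perturb_image(image), text)
        else:           # textual counterfactual round
            perturbed = model(image, perturb_text(text))
        logits = contrastive_logits(logits, perturbed, alpha)
    return logits

# Toy usage with a stand-in "model" over a 3-token vocabulary.
def toy_model(image, text):
    return np.array([1.0, 2.0, 0.5]) + 0.1 * len(text)

out = multi_round_sci(toy_model, image=None, text="cat",
                      perturb_image=lambda im: im,
                      perturb_text=lambda t: t + " [masked]")
```

Intuitively, each round subtracts the logit mass that survives a perturbation, so predictions driven purely by linguistic priors (unchanged under visual perturbation) are progressively suppressed; scaling `rounds` is the knob the paper reports as improving robustness.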