Scaling Test-Time Robustness of Vision-Language Models via Self-Critical Inference Framework

📅 2026-03-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the language bias and sensitivity commonly exhibited by large vision-language models during inference, often stemming from overreliance on linguistic priors. To mitigate this issue, the authors propose a Self-Critical Inference (SCI) framework that introduces multi-round counterfactual reasoning into vision-language models for the first time. SCI dynamically refines model predictions by alternately applying textual and visual perturbations coupled with visual contrastive decoding. Additionally, the study constructs DRBench, the first model-customized, dynamic robustness benchmark, enabling fine-grained evaluation of model behavior under perturbations. Experimental results demonstrate that SCI significantly outperforms existing methods on DRBench, and increasing the number of inference rounds further enhances robustness, surpassing single-step counterfactual reasoning strategies.
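The core mechanism described above — repeatedly contrasting the model's clean prediction against predictions under alternating visual and textual perturbations — can be sketched in a few lines. The snippet below is a minimal illustrative sketch, not the authors' implementation: `toy_forward`, the perturbation lambdas, and the alternation schedule are all assumptions made for demonstration; only the contrastive update rule follows the standard Visual Contrastive Decoding form.

```python
import numpy as np

def contrastive_logits(clean, counterfactual, alpha=1.0):
    """One Visual Contrastive Decoding step: boost the clean logits and
    subtract the counterfactual ones, down-weighting tokens the model
    predicts even without reliable visual evidence."""
    return (1.0 + alpha) * clean - alpha * counterfactual

def self_critical_inference(forward, image, text, rounds=3, alpha=0.5,
                            perturb_visual=None, perturb_textual=None):
    """Illustrative multi-round counterfactual loop: alternate visual and
    textual perturbations, applying a contrastive update each round.
    (Hypothetical interface; the paper's actual schedule may differ.)"""
    logits = forward(image, text)
    for r in range(rounds):
        if r % 2 == 0:   # even rounds: perturb the visual input
            cf = forward(perturb_visual(image), text)
        else:            # odd rounds: perturb the textual input
            cf = forward(image, perturb_textual(text))
        logits = contrastive_logits(logits, cf, alpha)
    return logits

# Toy demonstration with a stand-in "model" over a 3-token vocabulary.
def toy_forward(image, text):
    # image is a scalar "feature"; logits depend on both modalities.
    return np.array([image, len(text) * 0.1, 1.0])

out = self_critical_inference(
    toy_forward, image=2.0, text="is there a dog?",
    rounds=2, alpha=0.5,
    perturb_visual=lambda im: im * 0.0,               # blank the image
    perturb_textual=lambda t: t.replace("dog", "cat"),  # swap the query object
)
# Tokens grounded in the clean image are amplified relative to tokens the
# model would emit regardless of the visual input.
```

Increasing `rounds` corresponds to the paper's test-time scaling axis: each additional counterfactual pass applies one more contrastive correction to the running logits.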

📝 Abstract
The emergence of Large Language Models (LLMs) has driven rapid progress in multi-modal learning, particularly in the development of Large Vision-Language Models (LVLMs). However, existing LVLM training paradigms place excessive reliance on the LLM component, giving rise to two critical robustness challenges: language bias and language sensitivity. To address both issues simultaneously, we propose a novel Self-Critical Inference (SCI) framework that extends Visual Contrastive Decoding by conducting multi-round counterfactual reasoning through both textual and visual perturbations. This process further introduces a new strategy for improving robustness by scaling the number of counterfactual rounds. Moreover, we also observe that failure cases of LVLMs differ significantly across models, indicating that fixed robustness benchmarks may not be able to capture the true reliability of LVLMs. To this end, we propose the Dynamic Robustness Benchmark (DRBench), a model-specific evaluation framework targeting both language bias and sensitivity issues. Extensive experiments show that SCI consistently outperforms baseline methods on DRBench, and that increasing the number of inference rounds further boosts robustness beyond existing single-step counterfactual reasoning methods.
Problem

Research questions and friction points this paper is trying to address.

language bias
language sensitivity
vision-language models
test-time robustness
robustness evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Critical Inference
Visual Contrastive Decoding
Counterfactual Reasoning
Dynamic Robustness Benchmark
Test-Time Robustness
Kaihua Tang
Nanyang Technological University
Computer Vision · Machine Learning · Artificial Intelligence
Jiaxin Qi
Computer Network Information Center, CAS, China
Jinli Ou
University of Chinese Academy of Sciences, China
Yuhua Zheng
HIAS, University of Chinese Academy of Sciences, China
Jianqiang Huang
Nanyang Technological University, Chinese Academy of Sciences
Computer Vision · Machine Learning · Causality