Detecting Misbehaviors of Large Vision-Language Models by Evidential Uncertainty Quantification

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large vision-language models are prone to uncontrolled behaviors, such as hallucination and jailbreaking, when exposed to incompetent or adversarial inputs, necessitating effective detection mechanisms. This work proposes Evidential Uncertainty Quantification (EUQ), the first approach to introduce Dempster–Shafer evidence theory into this domain. By modeling output-head features as supporting and opposing evidence, EUQ captures both information conflict and epistemic ignorance within a single forward pass. The method enables fine-grained identification of four failure modes (hallucinations, jailbreaks, adversarial vulnerabilities, and out-of-distribution failures): for example, high conflict corresponds to hallucination, while high ignorance indicates out-of-distribution failure. EUQ substantially outperforms existing baselines and offers a novel perspective for analyzing the layer-wise dynamics of uncertainty in vision-language models.

📝 Abstract
Large vision-language models (LVLMs) have shown substantial advances in multimodal understanding and generation. However, when presented with incompetent or adversarial inputs, they frequently produce unreliable or even harmful content, such as fact hallucinations or dangerous instructions. This misalignment with human expectations, referred to as "misbehaviors" of LVLMs, raises serious concerns for deployment in critical applications. These misbehaviors are found to stem from epistemic uncertainty, specifically either conflicting internal knowledge or the absence of supporting information. However, existing uncertainty quantification methods, which typically capture only overall epistemic uncertainty, have shown limited effectiveness in identifying such issues. To address this gap, we propose Evidential Uncertainty Quantification (EUQ), a fine-grained method that captures both information conflict and ignorance for effective detection of LVLM misbehaviors. In particular, we interpret features from the model output head as either supporting (positive) or opposing (negative) evidence. Leveraging Evidence Theory, we model and aggregate this evidence to quantify internal conflict and knowledge gaps within a single forward pass. We extensively evaluate our method across four categories of misbehavior, including hallucinations, jailbreaks, adversarial vulnerabilities, and out-of-distribution (OOD) failures, using state-of-the-art LVLMs, and find that EUQ consistently outperforms strong baselines, showing that hallucinations correspond to high internal conflict and OOD failures to high ignorance. Furthermore, layer-wise evidential uncertainty dynamics analysis helps interpret the evolution of internal representations from a new perspective. The source code is available at https://github.com/HT86159/EUQ.
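The two quantities the abstract highlights, information conflict and ignorance, are standard notions in Dempster–Shafer evidence theory. As a minimal sketch (not the paper's implementation; the function name and mass values are hypothetical), the snippet below combines two mass functions on a binary frame {support, oppose} with Dempster's rule: the unnormalized conflict plays the role of the hallucination-style "conflict" signal, and the mass left on the full frame plays the role of the OOD-style "ignorance" signal.

```python
# Hypothetical sketch of Dempster–Shafer combination on the binary frame
# {support, oppose}. A mass function is a tuple (m_support, m_oppose, m_theta),
# where m_theta is mass on the whole frame, i.e. ignorance.
# This illustrates the conflict/ignorance quantities EUQ is described as
# measuring; it is NOT the paper's actual algorithm.

def combine(m1, m2):
    """Dempster's rule for two mass functions; returns (masses, conflict)."""
    s1, o1, t1 = m1
    s2, o2, t2 = m2
    # Conflict K: mass jointly assigned to incompatible singletons.
    k = s1 * o2 + o1 * s2
    if k >= 1.0:
        raise ValueError("total conflict; Dempster's rule is undefined")
    norm = 1.0 - k
    s = (s1 * s2 + s1 * t2 + t1 * s2) / norm   # combined support
    o = (o1 * o2 + o1 * t2 + t1 * o2) / norm   # combined opposition
    t = (t1 * t2) / norm                       # residual ignorance
    return (s, o, t), k

# Two pieces of evidence: one leaning "support", one leaning "oppose".
masses, conflict = combine((0.7, 0.1, 0.2), (0.2, 0.6, 0.2))
# conflict = 0.44: disagreeing evidence would flag a hallucination-like
# failure; a large residual ignorance masses[2] would flag an OOD-like gap.
```

In the paper's framing, such masses would be derived from output-head features interpreted as positive or negative evidence and aggregated within a single forward pass, rather than supplied by hand as here.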
Problem

Research questions and friction points this paper is trying to address.

Large Vision-Language Models
Misbehaviors
Uncertainty Quantification
Hallucinations
Out-of-Distribution Failures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evidential Uncertainty Quantification
Large Vision-Language Models
Misbehavior Detection
Epistemic Uncertainty
Evidence Theory
Tao Huang
Beijing Jiaotong University
Rui Wang
School of Automation and Intelligence, Beijing Jiaotong University
uncertainty estimation, software reliability, adversarial robustness, system safety, safety case
Xiaofei Liu
State Key Laboratory of Advanced Rail Autonomous Operation, China; Beijing Key Laboratory of Traffic Data Mining and Embodied Intelligence, China; School of Computer Science and Technology, Beijing Jiaotong University, China
Yi Qin
Chongqing University
signal processing, fault diagnosis, artificial intelligence, measurement
Li Duan
Beijing Key Laboratory of Security and Privacy in Intelligent Transportation, China
Liping Jing
Beijing Jiaotong University
Machine Learning, Data Mining