🤖 AI Summary
In sign language translation (SLT), gloss-free models often generate hallucinated text decoupled from visual evidence because they rely too heavily on linguistic priors. This work models hallucination through **visual input dependency** and proposes the first reference-free, interpretable reliability assessment method for SLT. Our approach quantifies the decoder's reliance on visual signals at both the token and sentence level via feature-sensitivity analysis and counterfactual video-masking perturbations. It further integrates textual signals, including confidence and perplexity, to enable fine-grained, grounding-aware evaluation across datasets and architectures. Experiments on PHOENIX-2014T and CSL-Daily demonstrate that our reliability scores correlate strongly and negatively with hallucination rates (r < −0.85) and significantly improve hallucination detection accuracy in unlabeled settings.
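The counterfactual-masking idea in the summary can be sketched in a few lines: score each generated token by how much its log-probability drops when the video input is replaced by a masked version. A large drop means the token depended on visual evidence; a drop near zero suggests it came from language priors. This is a minimal illustrative sketch, not the paper's implementation; the function names and the mean aggregation are assumptions.

```python
import math

def token_reliability(logp_clean, logp_masked):
    """Per-token reliability: drop in token log-probability when the
    video is masked (counterfactual input). Inputs are lists of
    per-token log-probs from a hypothetical SLT decoder."""
    return [lc - lm for lc, lm in zip(logp_clean, logp_masked)]

def sentence_reliability(logp_clean, logp_masked):
    """Aggregate token-level scores into one sentence-level score
    (simple mean here; the aggregation choice is illustrative)."""
    scores = token_reliability(logp_clean, logp_masked)
    return sum(scores) / len(scores)

# Toy example with three tokens: the second token's probability barely
# changes under masking, so it is the least visually grounded.
clean  = [math.log(0.60), math.log(0.50), math.log(0.70)]
masked = [math.log(0.10), math.log(0.45), math.log(0.20)]
per_token = token_reliability(clean, masked)
least_grounded = min(range(len(per_token)), key=lambda i: per_token[i])
print(least_grounded)  # → 1
```

In practice the log-probabilities would come from two forward passes of the same decoder, one on the clean video and one on the perturbed video; only the scoring logic is shown here.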
📝 Abstract
Hallucination, where models generate fluent text unsupported by visual evidence, remains a major flaw in vision-language models and is particularly critical in sign language translation (SLT). In SLT, meaning depends on precise grounding in video, and gloss-free models are especially vulnerable because they map continuous signer movements directly into natural language without the intermediate gloss supervision that would otherwise serve as an alignment signal. We argue that hallucinations arise when models rely on language priors rather than visual input. To capture this, we propose a token-level reliability measure that quantifies how much the decoder uses visual information. Our method combines feature-based sensitivity, which measures internal representation changes when the video is masked, with counterfactual signals, which capture probability differences between clean and perturbed video inputs. These signals are aggregated into a sentence-level reliability score, providing a compact and interpretable measure of visual grounding. We evaluate the proposed measure on two SLT benchmarks (PHOENIX-2014T and CSL-Daily) with both gloss-based and gloss-free models. Our results show that reliability predicts hallucination rates, generalizes across datasets and architectures, and decreases under visual degradations. Beyond these quantitative trends, we find that reliability distinguishes grounded tokens from guessed ones, enabling risk estimation without references; combined with text-based signals (confidence, perplexity, or entropy), it further improves hallucination risk estimation. Qualitative analysis highlights why gloss-free models are more susceptible to hallucination. Taken together, our findings establish reliability as a practical and reusable tool for diagnosing hallucinations in SLT, and lay the groundwork for more robust hallucination detection in multimodal generation.
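The abstract's final idea, combining visual reliability with text-based signals such as perplexity, can be sketched as a simple blended risk score. This is a hedged illustration under assumed choices: the logistic squashing, the `alpha` weight, and the function names are not from the paper, which does not specify an exact combination formula here.

```python
import math

def perplexity(token_probs):
    """Sequence perplexity from per-token probabilities:
    exp of the mean negative log-likelihood."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

def hallucination_risk(sentence_reliability, token_probs, alpha=0.5):
    """Blend low visual reliability and high perplexity into one risk
    score. Higher reliability should lower risk; higher perplexity
    should raise it. The mapping onto [0, 1) is an illustrative choice."""
    # Logistic squash: large positive reliability -> risk near 0.
    rel_risk = 1.0 / (1.0 + math.exp(sentence_reliability))
    # Perplexity >= 1, so this maps it into [0, 1).
    ppl_risk = 1.0 - 1.0 / perplexity(token_probs)
    return alpha * rel_risk + (1.0 - alpha) * ppl_risk

# A sentence whose decoder barely used the video (reliability ~ 0)
# scores as riskier than one with strong visual dependence.
probs = [0.60, 0.50, 0.70]
risk_grounded = hallucination_risk(2.0, probs)
risk_guessed = hallucination_risk(0.0, probs)
print(risk_guessed > risk_grounded)  # → True
```

The point of the sketch is the interface, not the weights: a reference-free risk estimate needs only model-internal quantities (reliability, confidence, perplexity), so it can run on unlabeled data.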