🤖 AI Summary
Scene Text Visual Question Answering (STVQA) systems face safety-critical risks from OCR errors—e.g., misreading “50 mph” as “60 mph”—yet existing rejection methods rely on miscalibrated output probabilities or semantic consistency, failing to capture OCR-specific uncertainty. Method: We propose Latent Representation Probing (LRP), a novel rejection paradigm that exploits discriminative uncertainty signals embedded in the intermediate-layer representations and attention patterns of vision-language models (VLMs). LRP employs a lightweight probe—featuring cross-layer representation concatenation, visual-token attention aggregation, and ensemble voting—to directly estimate answer confidence from hidden states, without OCR post-processing or task-specific supervision. Contribution/Results: On four image- and video-based STVQA benchmarks, LRP achieves an average 7.6% improvement in rejection accuracy. It demonstrates superior generalization across datasets and diverse OCR error types, significantly enhancing robustness in safety-sensitive scenarios.
📝 Abstract
As VLMs are deployed in safety-critical applications, their ability to abstain from answering when uncertain becomes crucial for reliability, especially in Scene Text Visual Question Answering (STVQA) tasks. For example, an OCR error like misreading "50 mph" as "60 mph" could cause a severe traffic accident. This leads us to ask: Can VLMs know when they can't see? Existing abstention methods suggest a pessimistic answer: they either rely on miscalibrated output probabilities or require semantic agreement, which is unsuitable for OCR tasks. However, this failure may indicate we are looking in the wrong place: uncertainty signals could be hidden in VLMs' internal representations.
Building on this insight, we propose Latent Representation Probing (LRP): training lightweight probes on hidden states or attention patterns. We explore three probe designs: concatenating representations across all layers, aggregating attention over visual tokens, and ensembling single-layer probes by majority vote. Experiments on four benchmarks spanning image and video modalities show that LRP improves abstention accuracy by 7.6% over the best baselines. Our analysis reveals that probes generalize across uncertainty sources and datasets, and that the most informative signals emerge from intermediate rather than final layers. This establishes a principled framework for building deployment-ready AI systems that detect confidence signals in internal states rather than relying on unreliable outputs.
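To make the probing idea concrete, here is an illustrative toy sketch (not the paper's code) of two of the three probe designs: a logistic probe over concatenated cross-layer hidden states, and a majority-vote ensemble of single-layer probes. Hidden states are simulated with NumPy under hypothetical shapes; in practice they would come from a VLM forward pass, and the labels would mark whether the model's answer was correct (1) or should be abstained from (0).

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic_probe(X, y, lr=0.1, epochs=200):
    """Train a lightweight logistic-regression probe with plain gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def predict(w, b, X):
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)

# Simulated per-layer hidden states: (n_layers, n_samples, d_model).
n_layers, n, d = 4, 200, 16
y = rng.integers(0, 2, size=n)           # 1 = answerable, 0 = abstain
H = rng.normal(size=(n_layers, n, d))
H[2] += y[:, None] * 1.5                 # plant the signal in an intermediate layer

# Design 1: concatenate representations across all layers, then probe.
X_cat = np.concatenate(H, axis=1)        # shape (n, n_layers * d)
w, b = logistic_probe(X_cat, y)
acc_cat = np.mean(predict(w, b, X_cat) == y)

# Design 2: ensemble of single-layer probes, combined by majority vote.
votes = [predict(*logistic_probe(H[l], y), H[l]) for l in range(n_layers)]
maj = (np.mean(votes, axis=0) > 0.5).astype(int)
acc_vote = np.mean(maj == y)

print(f"cross-layer concat probe acc: {acc_cat:.2f}")
print(f"majority-vote ensemble acc:  {acc_vote:.2f}")
```

Because the planted signal lives in an intermediate layer, the concatenated probe picks it up, mirroring the paper's finding that optimal signals emerge from intermediate rather than final layers. The attention-aggregation probe is omitted here; it would pool attention weights over visual tokens into the probe's feature vector instead of hidden states.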