🤖 AI Summary
This work addresses the challenge that multimodal large language models (MLLMs) often produce plausible yet incorrect outputs, a critical limitation exacerbated by the absence of a general, training-free mechanism for uncertainty quantification. To bridge this gap, we propose UMPIRE, a training-free framework that dynamically assesses output reliability by analyzing internal model features through two key signals: semantic volume and cross-modal response inconsistency. UMPIRE is the first method to enable universal uncertainty estimation across diverse modalities—including image-text, audio-text, and video-text—without relying on external tools or task-specific training, and it naturally extends to non-text-generation tasks. Experimental results demonstrate that UMPIRE significantly outperforms existing approaches in both error detection and uncertainty calibration.
📝 Abstract
Despite their capabilities, Multimodal Large Language Models (MLLMs) may produce plausible but erroneous outputs, hindering reliable deployment. Accurate uncertainty metrics could enable escalation of unreliable queries to human experts or larger models for improved performance. However, existing uncertainty metrics have practical constraints, such as being designed only for specific modalities, relying on external tools, or being computationally expensive. We introduce UMPIRE, a training-free uncertainty quantification framework for MLLMs that works efficiently across various input and output modalities without external tools, relying only on the models' own internal modality features. UMPIRE computes the incoherence-adjusted semantic volume of sampled MLLM responses for a given task instance, effectively capturing both the global semantic diversity of samples and the local incoherence of responses based on internal model confidence. We propose uncertainty desiderata for MLLMs and provide theoretical analysis motivating UMPIRE's design. Extensive experiments show that UMPIRE consistently outperforms baseline metrics in error detection and uncertainty calibration across image-, audio-, and video-text benchmarks, including adversarial and out-of-distribution settings. We also demonstrate UMPIRE's generalization to non-text output tasks, including image and audio generation.
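The abstract does not spell out the exact formulation, but the "semantic volume" idea can be illustrated with a minimal sketch: embed the sampled responses, form a Gram matrix, and use its log-determinant as an uncertainty score, with each sample down-weighted by the model's internal confidence so that incoherent (low-confidence) responses inflate the volume. The function name, the `1/confidence` weighting, and the embeddings below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def semantic_volume(embeddings, confidences, eps=1e-6):
    """Illustrative incoherence-adjusted semantic volume (NOT the paper's exact formula).

    embeddings:  (n, d) array, one embedding per sampled response
    confidences: (n,) per-response internal model confidences in (0, 1]
    Returns a scalar score: larger = more uncertain.
    """
    # Unit-normalize so the score reflects semantic direction, not magnitude.
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    # Assumed weighting: scale each sample by 1/confidence, so locally
    # incoherent (low-confidence) responses enlarge the spanned volume.
    W = np.diag(1.0 / np.asarray(confidences, dtype=float))
    G = W @ X @ X.T @ W
    # log-det of the regularized Gram matrix measures the volume spanned
    # by the samples: diverse responses -> large, redundant ones -> small.
    _, logdet = np.linalg.slogdet(G + eps * np.eye(len(X)))
    return logdet
```

Under this sketch, three identical responses collapse to a near-zero volume (large negative log-det), while semantically diverse responses, or confident-looking but mutually incoherent ones, score higher, matching the intuition that both global diversity and local incoherence signal unreliability.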