Do LLMs estimate uncertainty well in instruction-following?

📅 2024-10-18
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing uncertainty estimation methods for large language models (LLMs) in instruction-following tasks suffer from poor robustness against subtle instruction errors and inadequate calibration. Method: We propose the first instruction-following–specific uncertainty evaluation framework, featuring a controllable dual-version benchmark (Clean/Misleading) that disentangles semantic deviations from superficial perturbations, enabling fair cross-model and cross-method comparison; we further analyze internal states—including logits, attention weights, and intermediate activations—to probe uncertainty sources. Contribution/Results: Extensive experiments reveal that mainstream uncertainty measures—confidence, entropy, and ensemble-based scores—exhibit severe robustness deficiencies. While internal-state–based calibration improves reliability marginally, it remains substantially ineffective under semantically flawed instructions. Our work establishes a reproducible, attribution-aware evaluation paradigm for trustworthy AI agents, advancing principled uncertainty quantification in instruction-driven LLMs.
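The mainstream uncertainty measures named above (confidence and entropy) are typically computed from the model's per-token output distribution. A minimal sketch of both scores, assuming access to raw logits for a generated sequence (the paper's exact scoring details are not given here, so this is illustrative only):

```python
import numpy as np

def token_uncertainty(logits):
    """Two common sequence-level uncertainty scores computed from
    per-token logits (shape: [seq_len, vocab_size]).
    Illustrative sketch; not the paper's exact formulation."""
    # Numerically stable softmax over the vocabulary for each token
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    # Confidence: mean max-probability across generated tokens
    confidence = probs.max(axis=-1).mean()
    # Entropy: mean per-token predictive entropy
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=-1).mean()
    return confidence, entropy

# Usage with synthetic logits (5 tokens, vocabulary of 100)
rng = np.random.default_rng(0)
conf, ent = token_uncertainty(rng.normal(size=(5, 100)))
```

Higher entropy or lower confidence is usually read as higher uncertainty; the paper's finding is that such scores are poorly calibrated when the model's instruction-following errors are subtle.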

📝 Abstract
Large language models (LLMs) could be valuable personal AI agents across various domains, provided they can precisely follow user instructions. However, recent studies have shown significant limitations in LLMs' instruction-following capabilities, raising concerns about their reliability in high-stakes applications. Accurately estimating LLMs' uncertainty in adhering to instructions is critical to mitigating deployment risks. We present, to our knowledge, the first systematic evaluation of the uncertainty estimation abilities of LLMs in the context of instruction-following. Our study identifies key challenges with existing instruction-following benchmarks, where multiple factors are entangled with the uncertainty that stems from instruction-following, complicating isolation of that uncertainty and comparison across methods and models. To address these issues, we introduce a controlled evaluation setup with two versions of benchmark data, enabling a comprehensive comparison of uncertainty estimation methods under various conditions. Our findings show that existing uncertainty methods struggle, particularly when models make subtle errors in instruction following. While internal model states provide some improvement, they remain inadequate in more complex scenarios. The insights from our controlled evaluation setups provide a crucial understanding of LLMs' limitations and potential for uncertainty estimation in instruction-following tasks, paving the way for more trustworthy AI agents.
Problem

Research questions and friction points this paper is trying to address.

Assessing LLMs' uncertainty estimation in instruction-following tasks
Identifying limitations in current instruction-following benchmarks and methods
Developing controlled evaluation for reliable uncertainty estimation in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic evaluation of LLMs' uncertainty estimation
Controlled benchmark for comparing uncertainty methods
Analysis of internal states for error detection
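The internal-state analysis above can be approximated with a linear probe: train a simple classifier on intermediate hidden states to predict whether the model followed the instruction. A minimal sketch on synthetic data, assuming hidden-state vectors have already been collected and labeled (the dimensions, data, and probe choice here are illustrative, not the paper's setup):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: hidden-state vectors from an LLM's intermediate
# layer, labeled by whether the model followed the instruction.
# The "error direction" below is synthetic stand-in data.
rng = np.random.default_rng(42)
d = 64                                  # assumed hidden dimension
direction = rng.normal(size=d)          # synthetic error-signal direction
X = rng.normal(size=(400, d))           # stand-in activations
y = (X @ direction + 0.5 * rng.normal(size=400) > 0).astype(int)

# Linear probe: if held-out accuracy is well above chance, the
# internal states encode an instruction-following error signal.
probe = LogisticRegression(max_iter=1000).fit(X[:300], y[:300])
acc = probe.score(X[300:], y[300:])     # held-out probe accuracy
```

A linear probe is the weakest reasonable detector, so above-chance accuracy gives a conservative lower bound on how much error information the activations carry, which is consistent with the paper's finding that internal states help only marginally.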
🔎 Similar Papers
2024-05-10 · International Conference on Machine Learning · Citations: 1