Position: Uncertainty Quantification Needs Reassessment for Large-language Model Agents

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper identifies a fundamental limitation of the conventional aleatoric/epistemic dichotomy for uncertainty quantification in large language model (LLM) agents operating in open, interactive human-AI dialogue. Such settings involve complex, context-dependent uncertainties that the binary framework fails to capture. To address this gap, the paper presents the first systematic critique of the paradigm and introduces three uncertainty categories tailored to interactive scenarios: task underspecification, interactive learning dynamics, and linguistic output uncertainty. Methodologically, it combines conceptual analysis, interactive AI architecture design, semantic uncertainty modeling, and human-AI collaboration theory, advocating natural-language uncertainty expression over scalar metrics. The work lays a foundation for uncertainty research in LLM agents, providing both theoretical grounding and principled design guidelines for transparent, trustworthy, and interpretable next-generation agents.

📝 Abstract
Large language models (LLMs) and chatbot agents are known to produce wrong outputs at times, and it was recently shown that this can never be fully prevented. Uncertainty quantification therefore plays a crucial role, aiming to quantify the level of ambiguity as either a single overall number or two numbers, one each for aleatoric and epistemic uncertainty. This position paper argues that this traditional dichotomy of uncertainties is too limited for the open and interactive setting in which LLM agents operate when communicating with a user, and that we need research avenues that enrich uncertainties for this novel scenario. We review the literature and find that popular definitions of aleatoric and epistemic uncertainty directly contradict each other and lose their meaning in interactive LLM agent settings. We therefore propose three novel research directions that focus on uncertainties in such human-computer interactions: underspecification uncertainties, for when users do not provide all information or define the exact task up front; interactive learning, to ask follow-up questions that reduce uncertainty about the current context; and output uncertainties, to exploit the rich space of language and speech and express uncertainties as more than mere numbers. We expect these new ways of handling and communicating uncertainty to lead to LLM agent interactions that are more transparent, trustworthy, and intuitive.
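For concreteness, the "one overall number or two numbers" scheme the abstract critiques is typically computed as an entropy decomposition over an ensemble of predictive distributions. Below is a minimal, self-contained sketch of that conventional baseline; the ensemble values are made-up toy numbers, not results from the paper.

```python
# Conventional two-number uncertainty decomposition via ensemble entropy:
# total = H[ E_m p_m ], aleatoric = E_m H[p_m], epistemic = total - aleatoric.
# Toy illustration only; the distributions below are assumptions.
import numpy as np

def entropy(p: np.ndarray) -> float:
    """Shannon entropy in nats, ignoring zero-probability entries."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def decompose(ensemble: np.ndarray) -> dict:
    """ensemble: (M, K) array of M predictive distributions over K answers."""
    total = entropy(ensemble.mean(axis=0))               # H of the mean prediction
    aleatoric = float(np.mean([entropy(p) for p in ensemble]))  # mean per-member H
    return {"total": total, "aleatoric": aleatoric,
            "epistemic": total - aleatoric}              # mutual information

# Three ensemble members over four candidate answers.
ens = np.array([[0.7, 0.1, 0.1, 0.1],
                [0.1, 0.7, 0.1, 0.1],
                [0.4, 0.4, 0.1, 0.1]])
print(decompose(ens))
```

The paper's point is that in interactive settings these two scalars conflate distinct sources of ambiguity, which motivates the richer categories listed under Innovation below.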
Problem

Research questions and friction points this paper is trying to address.

Reassessing uncertainty quantification for interactive LLM agents
Addressing contradictions in aleatoric and epistemic uncertainty definitions
Proposing new uncertainty types for human-computer interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Underspecification uncertainties for incomplete user inputs
Interactive learning to reduce context uncertainty
Output uncertainties beyond numerical expressions (a combined sketch of all three ideas follows below)
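Since this is a position paper proposing research directions rather than a system, the following is purely a hypothetical sketch of how the three ideas could fit together in an agent loop: disagreement among sampled answers flags possible underspecification, a follow-up question is asked to reduce it, and any remaining uncertainty is verbalized rather than reported as a bare score. All names (sample_answers, verbalize) and thresholds are assumptions for the sketch.

```python
# Hypothetical agent loop: detect underspecification, clarify interactively,
# and express residual uncertainty in natural language. Not the authors' code.
import random
from collections import Counter

def sample_answers(prompt: str, n: int = 8) -> list[str]:
    """Stub standing in for repeated LLM sampling; replace with a real API call."""
    return random.choices(["Paris", "Paris", "Lyon"], k=n)

def verbalize(agreement: float) -> str:
    """Map an agreement score onto a natural-language hedge."""
    if agreement > 0.9:
        return "I am confident that"
    if agreement > 0.6:
        return "I believe, though I am not certain, that"
    return "I am unsure, but my best guess is that"

def answer_or_clarify(prompt: str, threshold: float = 0.5) -> str:
    answers = sample_answers(prompt)
    top, count = Counter(answers).most_common(1)[0]
    agreement = count / len(answers)
    if agreement < threshold:
        # High disagreement suggests the task is underspecified:
        # reduce uncertainty interactively instead of guessing.
        return "Could you clarify which country or context you mean?"
    return f"{verbalize(agreement)} the answer is {top}."

print(answer_or_clarify("What is the capital?"))
```

The thresholds and hedge phrasings here are arbitrary placeholders; calibrating them against how users actually interpret verbal uncertainty is precisely the kind of research the paper calls for.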