Understanding the Relationship between Prompts and Response Uncertainty in Large Language Models

📅 Unknown Date
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the reliability deficiency of large language models (LLMs) in high-stakes domains (e.g., healthcare) by quantifying the relationship between response uncertainty and input prompt informativeness. We propose a “prompt–response” concept model that formalizes response uncertainty as epistemic uncertainty and establishes an interpretable, decaying functional relationship between prompt informativeness and uncertainty. Methodologically, we combine the insight that LLMs implicitly learn latent concepts during pretraining with information-theoretic measures and an empirical analysis framework, validated systematically across multiple real-world datasets spanning diverse domains. Experiments demonstrate that increasing prompt informativeness significantly reduces response uncertainty. Our approach enables quantifiable, attribution-aware confidence estimation for high-risk applications, thereby enhancing the interpretability and controllability of LLM deployment in safety-critical settings.

๐Ÿ“ Abstract
Large language models (LLMs) are widely used in decision-making, but their reliability, especially in critical tasks like healthcare, is not well-established. Therefore, understanding how LLMs reason and make decisions is crucial for their safe deployment. This paper investigates how the uncertainty of responses generated by LLMs relates to the information provided in the input prompt. Leveraging the insight that LLMs learn to infer latent concepts during pretraining, we propose a prompt-response concept model that explains how LLMs generate responses and helps understand the relationship between prompts and response uncertainty. We show that the uncertainty decreases as the prompt's informativeness increases, similar to epistemic uncertainty. Our detailed experimental results on real-world datasets validate our proposed model.
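The abstract's central claim can be made concrete with a simple sampling-based estimate: query the model several times with the same prompt and measure the Shannon entropy of the empirical answer distribution, which should shrink as the prompt becomes more informative. This is a minimal sketch of that idea, not the paper's actual method; the sample lists below are hypothetical illustrations, and `response_entropy` is a helper name introduced here.

```python
from collections import Counter
from math import log2

def response_entropy(responses):
    """Shannon entropy (in bits) of the empirical distribution of
    sampled responses. Higher entropy = more response uncertainty
    for the prompt that produced them."""
    counts = Counter(responses)
    total = len(responses)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical samples: a vague prompt scatters the model's answers,
# while a more informative prompt concentrates them on one answer.
vague_prompt_samples = ["A", "B", "C", "A", "D", "B", "C", "A"]
informative_prompt_samples = ["A", "A", "A", "A", "B", "A", "A", "A"]

assert response_entropy(informative_prompt_samples) < response_entropy(vague_prompt_samples)
```

In practice the samples would come from repeated stochastic generations (e.g., temperature sampling) for the same prompt, with answers canonicalized before counting.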
Problem

Research questions and friction points this paper is trying to address.

How prompt content affects response uncertainty
Understanding the decision-making process of LLMs
Improving LLM reliability in critical tasks such as healthcare
Innovation

Methods, ideas, or system contributions that make the work stand out.

A prompt-response concept model of how LLMs generate responses
Shows that uncertainty decreases as prompt informativeness increases
Validation across multiple real-world datasets
🔎 Similar Papers
No similar papers found.