🤖 AI Summary
This work investigates whether large language models (LLMs) can articulate their internal answer distributions in natural language, termed *reflexive uncertainty*, rather than merely outputting scalar confidence scores. Method: the authors formally define this notion and introduce SelfReflect, a theoretically motivated metric that assesses how faithfully a summary string captures an LLM's internal answer distribution. Contribution/Results: SelfReflect discriminates even subtle differences between candidate summaries and aligns with human judgement, outperforming alternative metrics such as LLM judges and embedding comparisons. Experiments show that state-of-the-art reasoning models fail to spontaneously express their uncertainty accurately, but that sampling answers and then summarizing them yields faithful, fine-grained, human-interpretable uncertainty descriptions, offering a novel pathway toward trustworthy AI.
📝 Abstract
To reveal when a large language model (LLM) is uncertain about a response, uncertainty quantification commonly produces percentage numbers along with the output. But is this all we can do? We argue that in the output space of LLMs, the space of strings, there exist strings expressive enough to summarize the distribution over output strings the LLM deems possible. We lay a foundation for this new avenue of uncertainty explication and present SelfReflect, a theoretically motivated metric to assess how faithfully a string summarizes an LLM's internal answer distribution. We show that SelfReflect is able to discriminate even subtle differences between candidate summary strings and that it aligns with human judgement, outperforming alternative metrics such as LLM judges and embedding comparisons. With SelfReflect, we investigate a number of self-summarization methods and find that even state-of-the-art reasoning models struggle to explicate their internal uncertainty. But we find that faithful summarizations can be generated by sampling and summarizing. Our metric enables future work toward this universal form of LLM uncertainties.
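The sample-and-summarize idea mentioned in the abstract can be sketched minimally. This is an illustrative toy, not the paper's implementation: `sample_answers` is a stub returning canned answers (a real system would query the LLM repeatedly at nonzero temperature), and the summarizer, which in practice would itself be an LLM, is replaced here by simple frequency counting over the sampled answers:

```python
from collections import Counter

def sample_answers(prompt: str, n: int = 10) -> list[str]:
    """Stand-in for repeated LLM sampling (hypothetical stub).

    A real implementation would sample the model n times with
    temperature > 0 to obtain an empirical answer distribution.
    """
    # Canned samples emulating an uncertain model's answers.
    return ["Paris"] * 6 + ["Lyon"] * 3 + ["Marseille"] * 1

def summarize_distribution(answers: list[str]) -> str:
    """Turn sampled answers into one natural-language string that
    reflects the empirical answer distribution (here via counting;
    the paper uses an LLM to write the summary)."""
    counts = Counter(answers)
    total = len(answers)
    parts = [f"'{a}' ({c / total:.0%})" for a, c in counts.most_common()]
    return "Possible answers, by estimated likelihood: " + ", ".join(parts)

summary = summarize_distribution(sample_answers("What is the capital of France?"))
print(summary)
# → Possible answers, by estimated likelihood: 'Paris' (60%), 'Lyon' (30%), 'Marseille' (10%)
```

A metric like SelfReflect would then score how faithfully such a summary string reflects the model's actual answer distribution.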