Semantic Token Clustering for Efficient Uncertainty Quantification in Large Language Models

📅 2026-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of unreliable outputs from large language models and the high computational cost of existing uncertainty quantification methods. To this end, the authors propose an efficient approach that leverages semantic information from internal token embeddings for clustering. By integrating prefix matching with aggregated probability mass over semantic clusters, the method achieves accurate uncertainty estimation with only a single model pass and without requiring any auxiliary models. Evaluated across multiple benchmarks, the proposed technique matches the performance of state-of-the-art uncertainty quantification methods while significantly reducing computational overhead, thereby offering a practical balance between efficiency and accuracy.

📝 Abstract
Large language models (LLMs) have demonstrated remarkable capabilities across diverse tasks. However, the truthfulness of their outputs is not guaranteed, and their tendency toward overconfidence further limits reliability. Uncertainty quantification offers a promising way to identify potentially unreliable outputs, but most existing methods rely on repeated sampling or auxiliary models, introducing substantial computational overhead. To address these limitations, we propose Semantic Token Clustering (STC), an efficient uncertainty quantification method that leverages the semantic information inherently encoded in LLMs. Specifically, we group tokens into semantically consistent clusters using embedding clustering and prefix matching, and quantify uncertainty based on the probability mass aggregated over the corresponding semantic cluster. Our approach requires only a single generation and does not depend on auxiliary models. Experimental results show that STC achieves performance comparable to state-of-the-art baselines while substantially reducing computational overhead.
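The abstract's core idea, grouping next-token candidates into a semantic cluster and scoring confidence by the cluster's aggregated probability mass, can be illustrated with a small sketch. This is not the paper's exact algorithm: the cosine threshold, the greedy clustering around the top candidate, and the prefix-matching rule are all simplifying assumptions made here for illustration.

```python
import numpy as np

def semantic_cluster_confidence(token_ids, probs, embeddings, vocab,
                                sim_threshold=0.8):
    """Illustrative sketch (assumed details, not the paper's exact method):
    group next-token candidates whose embeddings are cosine-similar to the
    top candidate, or whose surface forms share a prefix with it, then
    aggregate their probability mass as a confidence score.

    token_ids:  candidate token ids at one decoding step
    probs:      their next-token probabilities (same order)
    embeddings: (vocab_size, d) token embedding matrix
    vocab:      id -> surface string mapping
    """
    top = token_ids[int(np.argmax(probs))]
    top_vec = embeddings[top] / np.linalg.norm(embeddings[top])
    top_str = vocab[top].strip().lower()

    mass = 0.0
    for tid, p in zip(token_ids, probs):
        vec = embeddings[tid]
        cos = float(vec @ top_vec / np.linalg.norm(vec))
        s = vocab[tid].strip().lower()
        prefix = s.startswith(top_str) or top_str.startswith(s)
        if cos >= sim_threshold or prefix:
            mass += p  # token joins the top candidate's semantic cluster
    return mass  # higher aggregated mass -> lower uncertainty
```

Because the score is computed from a single step's logits and the model's own embedding matrix, it needs only one generation pass and no auxiliary model, which is the efficiency argument the abstract makes.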
Problem

Research questions and friction points this paper is trying to address.

uncertainty quantification
large language models
computational overhead
overconfidence
reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic Token Clustering
Uncertainty Quantification
Large Language Models
Efficient Inference
Embedding Clustering
👥 Authors
Qi Cao (The University of Tokyo, Japan)
Andrew Gambardella (The University of Tokyo, Japan)
Takeshi Kojima (The University of Tokyo, Japan)
Yutaka Matsuo (Professor, The University of Tokyo; deep learning, web mining, artificial intelligence)
Yusuke Iwasawa (The University of Tokyo; deep learning, transfer learning, foundation model, meta learning)