Embedding Perturbation may Better Reflect the Uncertainty in LLM Reasoning

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of unreliable outputs in large language models during reasoning, where existing uncertainty quantification methods struggle to effectively capture uncertainty in intermediate reasoning steps. The authors propose a sensitivity-based uncertainty metric grounded in embedding perturbation: by applying small perturbations to the embeddings of preceding tokens and measuring the resulting changes in subsequent token predictions, the method quantifies uncertainty in intermediate reasoning without requiring multiple sampling passes. This approach is computationally efficient and demonstrates superior accuracy in identifying erroneous reasoning steps. Experimental results show that the proposed metric outperforms baseline methods—such as token probability and entropy—in both discriminative power for intermediate uncertainty and computational efficiency.
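The mechanism described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the linear "model", the Gaussian noise scale, the sample count, and the use of total-variation distance between next-token distributions are all illustrative assumptions chosen to keep the example self-contained.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    z = x - x.max(-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(-1, keepdims=True)

def sensitivity_score(forward, embeddings, pos, n_samples=32, sigma=0.01, seed=0):
    """Estimate how sensitive the token prediction at `pos` is to small
    Gaussian perturbations of the preceding token embeddings.

    `forward` maps an (n_tokens, embed_dim) array to (n_tokens, vocab) logits.
    Returns the mean total-variation distance between the perturbed and
    unperturbed next-token distributions (hyperparameters are illustrative).
    """
    rng = np.random.default_rng(seed)
    base = softmax(forward(embeddings))[pos]  # unperturbed distribution
    deltas = []
    for _ in range(n_samples):
        noisy = embeddings.copy()
        # perturb only the embeddings of tokens *preceding* position `pos`
        noisy[:pos] += sigma * rng.standard_normal(noisy[:pos].shape)
        pert = softmax(forward(noisy))[pos]
        # total-variation distance between the two distributions (in [0, 1])
        deltas.append(0.5 * np.abs(pert - base).sum())
    return float(np.mean(deltas))

# Toy stand-in for an LLM: a fixed linear map from embeddings to logits.
rng = np.random.default_rng(1)
W = rng.standard_normal((8, 50))          # embed_dim=8, vocab_size=50
forward = lambda E: E @ W
E = rng.standard_normal((5, 8))           # a 5-token "context"
score = sensitivity_score(forward, E, pos=4)
```

Unlike sampling-based UQ, each score needs only a handful of cheap perturbed forward passes around a single generation, which is the efficiency advantage the summary refers to.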

📝 Abstract
Large Language Models (LLMs) have achieved significant breakthroughs across diverse domains; however, they can still produce unreliable or misleading outputs. For responsible LLM application, Uncertainty Quantification (UQ) techniques are used to estimate a model's uncertainty about its outputs, indicating the likelihood that those outputs may be problematic. For LLM reasoning tasks, it is essential to estimate the uncertainty not only for the final answer, but also for the intermediate steps of the reasoning, as this can enable more fine-grained and targeted interventions. In this study, we explore which UQ metrics better reflect an LLM's "intermediate uncertainty" during reasoning. Our study reveals that an LLM's incorrect reasoning steps tend to contain tokens that are highly sensitive to perturbations of the preceding token embeddings. Incorrect (uncertain) intermediate steps can therefore be readily identified in practice using this sensitivity score as guidance. In our experiments, we show that such a perturbation-based metric achieves stronger uncertainty quantification performance than baseline methods such as token (generation) probability and token entropy. Moreover, unlike approaches that rely on multiple sampling, perturbation-based metrics offer better simplicity and efficiency.
Problem

Research questions and friction points this paper is trying to address.

Uncertainty Quantification
Large Language Models
Reasoning
Intermediate Steps
Embedding Perturbation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Embedding Perturbation
Uncertainty Quantification
LLM Reasoning
Intermediate Uncertainty
Sensitivity Score