Confidence over Time: Confidence Calibration with Temporal Logic for Large Language Model Reasoning

📅 2026-01-19
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing confidence estimation methods for large language models often reduce multi-step reasoning to a single scalar, neglecting the temporal evolution of confidence and rendering them susceptible to superficial factors such as response length. This limitation impedes their ability to distinguish correctly reasoned answers from confidently asserted but incorrect ones. To address this limitation, the work introduces Signal Temporal Logic (STL) into confidence calibration for the first time. By mining discriminative STL specifications, the approach identifies generalizable temporal patterns that separate correct from erroneous reasoning trajectories. Furthermore, it proposes a hypernetwork-driven dynamic STL modeling framework that adaptively adjusts confidence evolution rules based on contextual cues. Evaluated across multiple reasoning benchmarks, the method significantly improves the alignment between predicted confidence scores and actual accuracy, outperforming current state-of-the-art baselines.

📝 Abstract
Large Language Models (LLMs) increasingly rely on long-form, multi-step reasoning to solve complex tasks such as mathematical problem solving and scientific question answering. Despite strong performance, existing confidence estimation methods typically reduce an entire reasoning process to a single scalar score, ignoring how confidence evolves throughout the generation. As a result, these methods are often sensitive to superficial factors such as response length or verbosity, and struggle to distinguish correct reasoning from confidently stated errors. We propose to characterize the stepwise confidence signal using Signal Temporal Logic (STL). Using a discriminative STL mining procedure, we discover temporal formulas that distinguish the confidence signals of correct and incorrect responses. Our analysis finds that the STL patterns generalize across tasks, while their numeric parameters are sensitive to individual questions. Based on these insights, we develop a confidence estimation approach that instantiates STL blocks with parameter hypernetworks. Experiments on multiple reasoning tasks show that our confidence scores are better calibrated than the baselines.
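To make the STL idea concrete, here is a minimal sketch of quantitative robustness evaluation on a stepwise confidence trace. The formula shapes, window bounds, and threshold values are illustrative assumptions, not the patterns mined in the paper; the only premise taken from the abstract is that an STL formula's robustness sign can separate correct from incorrect reasoning trajectories.

```python
# Hedged sketch: STL robustness on a per-step confidence signal.
# All formulas and numbers below are hypothetical examples.

def robustness_always_above(trace, theta, lo, hi):
    """Robustness of G_[lo,hi](conf > theta): minimum margin over the window."""
    return min(c - theta for c in trace[lo:hi + 1])

def robustness_eventually_above(trace, theta, lo, hi):
    """Robustness of F_[lo,hi](conf > theta): maximum margin over the window."""
    return max(c - theta for c in trace[lo:hi + 1])

# Stepwise confidence for two hypothetical reasoning traces.
correct_trace   = [0.55, 0.62, 0.71, 0.78, 0.85]  # confidence rises and stays high
incorrect_trace = [0.90, 0.88, 0.60, 0.45, 0.40]  # confidently wrong: collapses later

# Example discriminative pattern: "over the final three steps, confidence
# always exceeds 0.5". Positive robustness = satisfied (with that margin).
theta = 0.5
r_correct = robustness_always_above(correct_trace, theta, 2, 4)
r_wrong   = robustness_always_above(incorrect_trace, theta, 2, 4)
print(round(r_correct, 2))  # 0.21  (satisfied)
print(round(r_wrong, 2))    # -0.1  (violated)
```

Under robustness semantics, the sign of the score is a satisfaction verdict and its magnitude is a margin, which is what makes mined formulas usable as continuous calibration features rather than hard pass/fail rules.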
Problem

Research questions and friction points this paper is trying to address.

confidence calibration
temporal logic
large language models
reasoning
confidence estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Signal Temporal Logic
Confidence Calibration
Large Language Models
Stepwise Reasoning
Hypernetworks
Zhenjiang Mao
University of Florida
Anirudhh Venkat
University of Florida
Artem Bisliouk
University of Mannheim
Akshat Kothiyal
University of Florida
Sindhura Kumbakonam Subramanian
University of Florida
Saithej Singhu
University of Florida
Ivan Ruchkin
Assistant Professor, Department of Electrical and Computer Engineering, University of Florida
Safe Autonomous Systems · Cyber-Physical Systems · Assurance · Verification · Monitoring