🤖 AI Summary
Existing confidence estimation methods for large language models often reduce multi-step reasoning to a single scalar, neglecting the temporal evolution of confidence and leaving them susceptible to superficial factors such as response length. This limitation impedes their ability to distinguish correctly reasoned answers from confidently asserted but incorrect ones. To address it, this work introduces Signal Temporal Logic (STL) into confidence calibration for the first time. By mining discriminative STL specifications, the approach identifies generalizable temporal patterns that separate correct from erroneous reasoning trajectories. It further proposes a hypernetwork-driven dynamic STL modeling framework that adaptively adjusts confidence-evolution rules based on contextual cues. Evaluated across multiple reasoning benchmarks, the method significantly improves the alignment between predicted confidence scores and actual accuracy, outperforming current state-of-the-art baselines.
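The hypernetwork-driven idea above can be sketched in miniature: a small network maps a context embedding (e.g. of the question) to the numeric parameters of a fixed STL template such as "confidence stays above θ for w steps". The dimensions, activations, random weights, and the squashing of outputs into valid ranges below are illustrative assumptions, not the paper's architecture.

```python
import math
import random

random.seed(0)
CTX_DIM, HID_DIM = 4, 3  # toy sizes; a real context embedding would be larger

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Randomly initialized weights stand in for a trained hypernetwork.
W1 = [[random.gauss(0, 1) for _ in range(CTX_DIM)] for _ in range(HID_DIM)]
W2 = [[random.gauss(0, 1) for _ in range(HID_DIM)] for _ in range(2)]

def hyper_params(context):
    """Predict (theta, window) for the STL template from a context vector."""
    hidden = [math.tanh(dot(row, context)) for row in W1]
    raw = [dot(row, hidden) for row in W2]
    theta = sigmoid(raw[0])                # confidence threshold in (0, 1)
    window = 1 + int(3 * sigmoid(raw[1]))  # window length in {1, 2, 3}
    return theta, window

theta, window = hyper_params([0.2, -0.5, 1.0, 0.3])
print(theta, window)  # question-specific STL parameters
```

The point of the design is that the STL template's *structure* is shared across questions while its *numeric parameters* are predicted per question, matching the summary's "adaptively adjusts confidence evolution rules based on contextual cues".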
📝 Abstract
Large Language Models (LLMs) increasingly rely on long-form, multi-step reasoning to solve complex tasks such as mathematical problem solving and scientific question answering. Despite strong performance, existing confidence estimation methods typically reduce an entire reasoning process to a single scalar score, ignoring how confidence evolves throughout generation. As a result, these methods are often sensitive to superficial factors such as response length or verbosity, and struggle to distinguish correct reasoning from confidently stated errors. We propose to characterize the stepwise confidence signal using Signal Temporal Logic (STL). Using a discriminative STL mining procedure, we discover temporal formulas that distinguish the confidence signals of correct and incorrect responses. Our analysis shows that the mined STL patterns generalize across tasks, while their numeric parameters are sensitive to individual questions. Based on these insights, we develop a confidence estimation approach in which hypernetworks predict the parameters of STL blocks. Experiments on multiple reasoning tasks show that our confidence scores are better calibrated than those of the baselines.
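As a concrete illustration of the kind of temporal formula such mining can produce, the sketch below evaluates the quantitative robustness of one hypothetical STL formula, F(G_[0,w](conf > θ)) ("eventually, confidence stays above θ for w steps"), over a stepwise confidence trace. The threshold, window size, and example traces are invented for illustration and are not the paper's mined specifications.

```python
def robustness_eventually_always(trace, theta, window):
    """Robustness of F(G_[0,window](conf > theta)):
    eventually, confidence stays above theta for `window` consecutive steps.
    Positive robustness means the trace satisfies the formula."""
    best = float("-inf")
    for i in range(len(trace) - window + 1):
        # G over the window: min of (conf - theta); F: max over start points
        best = max(best, min(c - theta for c in trace[i:i + window]))
    return best

# A trace whose confidence rises and stabilizes (plausibly correct reasoning)
correct_like = [0.4, 0.55, 0.7, 0.82, 0.85, 0.86]
# A trace that starts high and collapses (a confidently stated error, hypothetically)
incorrect_like = [0.9, 0.85, 0.5, 0.45, 0.4, 0.35]

print(robustness_eventually_always(correct_like, theta=0.8, window=3))    # positive
print(robustness_eventually_always(incorrect_like, theta=0.8, window=3))  # negative
```

A discriminative mining procedure in this spirit would search over formula templates and parameters (θ, w) so that robustness is positive on confidence traces of correct responses and negative on those of incorrect ones; the sign or magnitude of robustness can then feed a calibrated confidence score.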