🤖 AI Summary
Large language models (LLMs) suffer from miscalibrated confidence: their stated confidence scores often misalign with the actual reliability of their predictions, and prior work remains inconclusive on whether prompting can effectively modulate confidence. This paper provides the first empirical demonstration that LLM confidence can be directionally and controllably adjusted via semantically guided multi-prompt intervention. Building on this, the authors propose SteeringConf, a framework that integrates confidence steering, weighted aggregation, and semantic-consistency-driven answer filtering into an end-to-end confidence calibration and failure detection system. Evaluated across seven benchmarks, SteeringConf significantly reduces Expected Calibration Error (ECE) and Brier Score while improving failure detection accuracy by 12.6% on average, and it exhibits strong cross-model and cross-task robustness, consistently outperforming all baselines.
📝 Abstract
Large Language Models (LLMs) often exhibit misaligned confidence scores, usually overestimating the reliability of their predictions. While verbalized confidence in LLMs has gained attention, prior work remains divided on whether confidence scores can be systematically steered through prompting. Recent studies even argue that such prompt-induced confidence shifts are negligible, suggesting that LLM confidence calibration is rigid to linguistic interventions. Contrary to these claims, we first rigorously confirm the existence of directional confidence shifts by probing three models (GPT-3.5, LLaMA3-70B, and GPT-4) across 7 benchmarks, demonstrating that explicit instructions can inflate or deflate confidence scores in a regulated manner. Based on this observation, we propose SteeringConf, a novel framework with three components: confidence steering, steered confidence aggregation, and steered answer selection. SteeringConf leverages a confidence manipulation mechanism to steer the confidence scores of LLMs in several desired directions, followed by a summarization module that aggregates the steered confidence scores to produce a final prediction. Evaluated on the same 7 benchmarks, our method consistently outperforms the baselines on calibration metrics for both confidence calibration and failure detection.
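To make the aggregation and answer-selection steps concrete, here is a minimal sketch of how steered outputs might be combined into a final prediction. The function `aggregate_steered` and its weighting scheme are illustrative assumptions for exposition, not the paper's exact formulas: it takes one (answer, confidence) pair per steering prompt, accumulates weighted confidence mass per distinct answer, picks the answer with the largest mass, and reports the fraction of mass on that answer as a consistency-based final confidence.

```python
from collections import defaultdict


def aggregate_steered(samples, weights=None):
    """Combine (answer, confidence) pairs from differently steered prompts.

    samples: list of (answer, confidence) tuples, one per steering prompt.
    weights: optional per-prompt weights (hypothetical; the paper's actual
             weighting may differ).
    Returns (final_answer, final_confidence).
    """
    if weights is None:
        weights = [1.0] * len(samples)

    # Accumulate weighted confidence mass for each distinct answer.
    mass = defaultdict(float)
    for (answer, conf), w in zip(samples, weights):
        mass[answer] += w * conf

    # Answer selection: the answer backed by the most weighted confidence.
    best = max(mass, key=mass.get)

    # Final confidence: share of total mass concentrated on the chosen
    # answer, a simple proxy for cross-prompt semantic consistency.
    final_conf = mass[best] / sum(mass.values())
    return best, final_conf
```

For instance, if an over-confidence prompt, a neutral prompt, and an under-confidence prompt yield `("A", 0.9)`, `("A", 0.4)`, and `("B", 0.8)`, the answer "A" wins with an aggregated confidence of about 0.62, lower than the inflated 0.9 but higher than the deflated 0.4.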