Causal Evidence that Language Models use Confidence to Drive Behavior

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether large language models (LLMs) utilize internal confidence signals to govern their decision to respond or abstain. Employing a four-stage abstention paradigm combined with activation interventions, RAG retrieval scoring, and semantic feature analysis, the work provides the first causal evidence that LLMs primarily rely on endogenous confidence—rather than external cues—to guide abstention behavior, with effect sizes significantly exceeding those of alternative features. Furthermore, the models dynamically adjust their abstention thresholds in response to instructions, revealing a two-stage metacognitive control mechanism analogous to biological systems. These findings demonstrate that LLMs possess an active, confidence-based regulatory capacity for behavioral modulation, offering critical empirical support for understanding their cognitive-like decision-making processes.
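The two-stage control mechanism described in the summary (an internal confidence estimate, then a threshold-based decision to answer or abstain) can be illustrated with a minimal toy sketch. Everything here is hypothetical: the confidence measure (max answer-token probability), the threshold values, and the function names are illustrative choices, not the paper's implementation.

```python
# Toy sketch of a two-stage abstention policy: stage 1 estimates
# confidence, stage 2 compares it to an (adjustable) threshold.
# All names and numbers are illustrative, not from the paper.

def estimate_confidence(token_probs):
    """Stage 1: a toy confidence estimate -- here, the max answer-token probability."""
    return max(token_probs)

def decide(token_probs, threshold=0.6):
    """Stage 2: answer if confidence clears the threshold, else abstain."""
    return "answer" if estimate_confidence(token_probs) >= threshold else "abstain"

print(decide([0.7, 0.2, 0.1]))         # high confidence  -> answer
print(decide([0.4, 0.35, 0.25]))       # low confidence   -> abstain
print(decide([0.4, 0.35, 0.25], 0.3))  # lowered (instructed) threshold -> answer
```

The third call mirrors the paper's finding that models can shift their abstention policy when instructed: the confidence estimate is unchanged, only the threshold moves.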

📝 Abstract
Metacognition -- the ability to assess one's own cognitive performance -- is documented across species, with internal confidence estimates serving as a key signal for adaptive behavior. While confidence can be extracted from Large Language Model (LLM) outputs, whether models actively use these signals to regulate behavior remains a fundamental question. We investigate this through a four-phase abstention paradigm. Phase 1 established internal confidence estimates in the absence of an abstention option. Phase 2 revealed that LLMs apply implicit thresholds to these estimates when deciding to answer or abstain. Confidence emerged as the dominant predictor of behavior, with effect sizes an order of magnitude larger than knowledge retrieval accessibility (RAG scores) or surface-level semantic features. Phase 3 provided causal evidence through activation steering: manipulating internal confidence signals correspondingly shifted abstention rates. Finally, Phase 4 demonstrated that models can systematically vary abstention policies based on instructed thresholds. Our findings indicate that abstention arises from the joint operation of internal confidence representations and threshold-based policies, mirroring the two-stage metacognitive control found in biological systems. This capacity is essential as LLMs transition into autonomous agents that must recognize their own uncertainty to decide when to act or seek help.
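The activation-steering intervention of Phase 3 (adding or subtracting a "confidence" direction from hidden activations and observing the shift in abstention) can be sketched in numpy. This is a toy model, not the paper's code: the hidden state, the confidence direction, the steering scale, and the abstention readout are all assumed for illustration.

```python
import numpy as np

# Toy activation-steering sketch: nudge a hidden state along an assumed
# "confidence" direction and observe a toy abstention score move.
rng = np.random.default_rng(0)
hidden = rng.normal(size=8)                 # hypothetical hidden state at some layer
conf_direction = np.ones(8) / np.sqrt(8)    # hypothetical unit confidence vector

def abstention_score(h, direction=conf_direction):
    # Toy readout: lower projected confidence -> higher abstention score.
    return -float(h @ direction)

steered_up = hidden + 2.0 * conf_direction    # boost internal confidence
steered_down = hidden - 2.0 * conf_direction  # suppress internal confidence

# Steering confidence up lowers the abstention score; steering it down raises it.
assert abstention_score(steered_up) < abstention_score(hidden) < abstention_score(steered_down)
print("steering the confidence direction shifts the abstention score")
```

In the real intervention the direction would be found in the model's residual stream (e.g. via contrastive prompts) and the readout would be the model's actual abstention rate; the linear add-then-measure structure is the part the sketch preserves.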
Problem

Research questions and friction points this paper is trying to address.

metacognition
confidence
large language models
abstention
behavior regulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

metacognition
confidence estimation
abstention behavior
activation steering
threshold-based policy
🔎 Similar Papers
2024-01-24 · Nature Machine Intelligence · Citations: 7