Can LLMs Predict Their Own Failures? Self-Awareness via Internal Circuits

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether large language models (LLMs) can autonomously detect their own generation errors—such as hallucinations or reasoning failures—solely from internal states (hidden activations and attention patterns), without external discriminators or multi-sample decoding. To this end, the authors propose Gnosis: an intrinsic self-verification framework with a frozen backbone, negligible additional inference overhead, and sequence-length independence. Its core is a lightweight, plug-and-play module (adding only ~5M parameters) that compresses hidden-state and attention trajectories to extract failure signals, enabling zero-shot, prefix-level failure prediction and compute-aware control. Evaluated on mathematical reasoning, open-domain question answering, and academic knowledge benchmarks, Gnosis significantly outperforms strong internal baselines and large external discriminators while improving both accuracy and calibration.

📝 Abstract
Large language models (LLMs) generate fluent and complex outputs but often fail to recognize their own mistakes and hallucinations. Existing approaches typically rely on external judges, multi-sample consistency, or text-based self-critique, which incur additional compute or correlate weakly with true correctness. We ask: can LLMs predict their own failures by inspecting internal states during inference? We introduce Gnosis, a lightweight self-awareness mechanism that enables frozen LLMs to perform intrinsic self-verification by decoding signals from hidden states and attention patterns. Gnosis passively observes internal traces, compresses them into fixed-budget descriptors, and predicts correctness with negligible inference cost, adding only ~5M parameters and operating independently of sequence length. Across math reasoning, open-domain question answering, and academic knowledge benchmarks, and over frozen backbones ranging from 1.7B to 20B parameters, Gnosis consistently outperforms strong internal baselines and large external judges in both accuracy and calibration. Moreover, it generalizes zero-shot to partial generations, enabling early detection of failing trajectories and compute-aware control. These results show that reliable correctness cues are intrinsic to the generation process and can be extracted efficiently without external supervision.
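The abstract's core mechanism — compressing a variable-length trace of internal states into a fixed-budget descriptor and scoring correctness with a tiny head — can be sketched as follows. This is a minimal numpy illustration, not the paper's actual architecture: the chunked mean-pooling compressor, the untrained logistic head, and all names here are assumptions chosen only to show how the output size can stay independent of sequence length.

```python
import numpy as np

rng = np.random.default_rng(0)

def fixed_budget_descriptor(hidden_states: np.ndarray, n_chunks: int = 8) -> np.ndarray:
    """Compress a (seq_len, d_model) hidden-state trajectory into a fixed-size
    descriptor by mean-pooling over n_chunks contiguous segments, so the
    descriptor size depends only on n_chunks and d_model, not on seq_len."""
    chunks = np.array_split(hidden_states, n_chunks, axis=0)
    pooled = [c.mean(axis=0) for c in chunks]
    return np.concatenate(pooled)  # shape: (n_chunks * d_model,)

def correctness_probe(descriptor: np.ndarray, w: np.ndarray, b: float) -> float:
    """Tiny logistic head mapping the descriptor to an estimated P(correct)."""
    return float(1.0 / (1.0 + np.exp(-(descriptor @ w + b))))

d_model = 16
w = rng.normal(size=8 * d_model) * 0.01  # illustrative untrained weights

short = fixed_budget_descriptor(rng.normal(size=(12, d_model)))    # short generation
long = fixed_budget_descriptor(rng.normal(size=(300, d_model)))    # long generation
assert short.shape == long.shape  # descriptor is sequence-length independent
p = correctness_probe(long, w, 0.0)
print(short.shape, 0.0 < p < 1.0)
```

A real probe of this kind would be trained on (trace, correctness) pairs collected from a frozen backbone; the point of the sketch is only the fixed compute budget per verification call.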
Problem

Research questions and friction points this paper is trying to address.

Predicting LLM failures via internal state inspection
Enabling self-verification with minimal added parameters
Detecting errors early without external supervision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight self-awareness via internal state decoding
Fixed-budget compression of hidden and attention traces
Zero-shot generalization for early failure detection
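The early-detection idea above can be made concrete with a small sketch: run a probe over growing prefixes of the internal trace and abort generation when the predicted failure probability crosses a threshold. Everything here is hypothetical — the mean-pooled prefix descriptor, the untrained logistic probe, and the 0.9 threshold are illustrative stand-ins, not Gnosis itself.

```python
import numpy as np

def prefix_failure_scores(hidden_states, probe, check_every=4):
    """Score partial generations: apply the probe to each growing prefix of a
    (seq_len, d_model) trace, returning one P(failure) per checkpoint."""
    scores = []
    for t in range(check_every, len(hidden_states) + 1, check_every):
        descriptor = hidden_states[:t].mean(axis=0)  # length-independent pooling
        scores.append(probe(descriptor))
    return scores

# Hypothetical untrained probe, for illustration only.
rng = np.random.default_rng(1)
d_model = 16
w = rng.normal(size=d_model) * 0.1
probe = lambda x: float(1.0 / (1.0 + np.exp(-(x @ w))))

traj = rng.normal(size=(20, d_model))
scores = prefix_failure_scores(traj, probe)
should_abort = any(s > 0.9 for s in scores)  # compute-aware control decision
print(len(scores), should_abort)
```

Checking at fixed intervals keeps the verification cost a small constant per checkpoint, which is what makes aborting a failing trajectory early cheaper than completing and re-sampling it.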