Spilled Energy in Large Language Models

📅 2026-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models frequently generate factual errors, biases, and hallucinations during inference, yet existing approaches struggle to detect and localize such issues efficiently. This work treats the model's softmax classifier as an energy-based model and introduces two training-free metrics—spilled energy and marginalized energy—that track energy spills during decoding and sequence-level energy consistency. Because both metrics are computed directly from output logits, the method detects hallucinations without trained probes or activation ablations. Experiments on nine benchmarks and synthetic algebraic tasks show robust, generalizable detection across multiple mainstream large language models.

📝 Abstract
We reinterpret the final Large Language Model (LLM) softmax classifier as an Energy-Based Model (EBM), decomposing the sequence-to-sequence probability chain into multiple interacting EBMs at inference. This principled approach allows us to track "energy spills" during decoding, which we empirically show correlate with factual errors, biases, and failures. Similar to Orgad et al. (2025), our method localizes the exact answer token and subsequently tests for hallucinations. Crucially, however, we achieve this without requiring trained probe classifiers or activation ablations. Instead, we introduce two completely training-free metrics derived directly from output logits: spilled energy, which captures the discrepancy between energy values across consecutive generation steps that should theoretically match, and marginalized energy, which is measurable at a single step. Evaluated on nine benchmarks across state-of-the-art LLMs (including LLaMA, Mistral, and Gemma) and on synthetic algebraic operations (Qwen3), our approach demonstrates robust, competitive hallucination detection and cross-task generalization. Notably, these results hold for both pretrained and instruction-tuned variants without introducing any training overhead.
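The abstract's core idea—reading the softmax classifier as an EBM and computing energies straight from logits—can be illustrated with a small sketch. The paper's exact estimators are not reproduced here; the function names, the use of `-logsumexp(logits)` as the per-step free energy, and the consecutive-step gap used as a "spill" proxy are all assumptions for illustration only.

```python
import numpy as np

def free_energy(logits: np.ndarray) -> float:
    """Softmax-as-EBM free energy of one decoding step: -logsumexp(logits).
    Computed with the max-shift trick for numerical stability."""
    m = logits.max()
    return float(-(m + np.log(np.exp(logits - m).sum())))

def marginalized_energy(logits: np.ndarray, token_id: int) -> float:
    """Illustrative single-step quantity: the chosen token's energy relative
    to the step's free energy, which equals -log p(token_id) >= 0."""
    return float(-logits[token_id] - free_energy(logits))

def spilled_energy(logits_t: np.ndarray, logits_t_plus_1: np.ndarray) -> float:
    """Toy proxy for cross-step consistency: absolute gap between the free
    energies of two consecutive decoding steps that, under the chain
    decomposition described in the abstract, should theoretically match."""
    return abs(free_energy(logits_t) - free_energy(logits_t_plus_1))
```

In this reading, large spilled-energy values across steps would flag the inconsistencies the paper associates with hallucinations, while marginalized energy is available from a single step's logits alone.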
Problem

Research questions and friction points this paper is trying to address.

hallucination
large language models
energy-based models
factual errors
bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

Energy-Based Model
Spilled Energy
Hallucination Detection
Training-Free Metric
Logit Analysis