🤖 AI Summary
This study addresses hallucination in large language models (LLMs) triggered by symbolic linguistic elements such as modifiers, negations, and numerals. We propose the first cross-layer hallucination localization framework that integrates symbolic-linguistic knowledge. Methodologically, we combine local sensitivity analysis (LSC), attention activation variance tracking, and symbolic semantic annotation to systematically characterize the internal evolution of hallucinations. We find that hallucinations originate from symbolic semantic representation collapse in early transformer layers (2-4), reflecting a fundamental failure in symbolic processing rather than mere generative inaccuracy, and that negation triggers induce pronounced network-wide surges in attention variance. Evaluated on HaluEval and TruthfulQA across five mainstream LLMs, our analysis reveals hallucination rates of 78.3%-83.7% for Gemma variants. This work establishes the first symbolic-linguistic modeling of hallucination mechanisms, offering a new paradigm for improving model interpretability and robustness.
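The attention activation variance tracking described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the attention tensors are synthetic stand-ins (in practice they would come from a model run with attention outputs enabled), and the function name and trigger-token indexing are assumptions for the example.

```python
import numpy as np

def attention_variance_by_layer(attentions, trigger_positions):
    """Variance of attention mass received by symbolic trigger tokens, per layer.

    attentions: list of arrays, one per layer, each shaped
        (num_heads, seq_len, seq_len), holding attention weights.
    trigger_positions: indices of symbolic trigger tokens (e.g. a negation word).
    """
    variances = []
    for layer_attn in attentions:
        # Attention each token pays to the trigger positions, across all heads.
        mass = layer_attn[:, :, trigger_positions].sum(axis=-1)  # (heads, seq_len)
        variances.append(float(mass.var()))
    return variances

# Synthetic example: 6 layers, 4 heads, a sequence of 10 tokens.
rng = np.random.default_rng(0)
attns = []
for _ in range(6):
    logits = rng.normal(size=(4, 10, 10))
    # Row-wise softmax so each token's attention distribution sums to 1.
    attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    attns.append(attn)

per_layer = attention_variance_by_layer(attns, trigger_positions=[3])
```

Comparing `per_layer` values between early and deep layers, or between inputs with and without a symbolic trigger, is one simple way to surface the early-layer variance surges the study reports.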
📝 Abstract
LLMs still struggle with hallucination, especially when confronted with symbolic triggers such as modifiers, negation, numerals, exceptions, and named entities. Yet we lack a clear understanding of where these symbolic hallucinations originate, so it is crucial to handle such triggers systematically and to localize where hallucination emerges inside the model. Prior work has explored localization with statistical techniques such as local sensitivity analysis (LSC) and activation variance analysis, but these methods treat all tokens equally and overlook the role symbolic linguistic knowledge plays in triggering hallucinations. No previous approach has investigated how symbolic elements specifically drive hallucination failures across model layers, nor has symbolic linguistic knowledge been used as the foundation of a localization framework. We propose the first symbolic localization framework that leverages symbolic linguistic and semantic knowledge to trace the development of hallucinations across all model layers. Focusing on how models process symbolic triggers, we analyze five models on HaluEval and TruthfulQA. Our analysis reveals that attention variance for these linguistic elements surges to critical instability in the early layers (2-4), with negation triggering the most severe variance spikes, showing that symbolic semantic processing breaks down from the very beginning. Despite larger model sizes, hallucination rates remain consistently high (78.3%-83.7% across Gemma variants), with steep attention drops for symbolic semantic triggers in the deeper layers. Our findings demonstrate that hallucination is fundamentally a symbolic linguistic processing failure rather than a general generation problem, and that symbolic semantic knowledge provides the key to understanding and localizing hallucination mechanisms in LLMs.