🤖 AI Summary
This work investigates the robustness of knowledge representations in large language models (LLMs), finding that their factual judgments depend heavily on the surface form of statements in the training data rather than their underlying semantics, which makes them highly sensitive to semantically-preserving perturbations such as misspellings and syntactic rephrasings. To assess this systematically, the authors apply a perturbation framework built on semantic invariance, evaluating four LLM families across five knowledge-evaluation datasets and three knowledge probing methods. The analysis shows quantitatively that the internal separability between true and false statements deteriorates sharply as inputs drift away from the pre-training distribution. These findings substantiate the shallow, non-robust nature of LLM knowledge encoding, challenge the reliability of truthfulness probes, and provide empirical evidence on the generalization limits of foundation models.
📝 Abstract
For Large Language Models (LLMs) to be reliable, they must learn robust knowledge that can be generally applied in diverse settings -- often unlike those seen during training. Yet, extensive research has shown that LLM performance can be brittle, with models exhibiting excessive sensitivity to trivial input variations. In this work, we explore whether this brittleness is a direct result of unstable internal knowledge representations. To answer this question, we build on previous work showing that LLM representations encode statement truthfulness -- i.e., true, factual statements can be easily separated from false, inaccurate ones. Specifically, we test the robustness of learned knowledge by evaluating representation separability on samples that have undergone superficial transformations to drive them out-of-distribution (OOD), such as typos or reformulations. By applying semantically-preserving perturbations, we study how separability degrades as statements become more OOD, across four LLM families, five evaluation datasets, and three knowledge probing methods. Our results reveal that internal representations of statement truthfulness collapse as the samples' presentations become less similar to those seen during pre-training. While LLMs can often distinguish between true and false statements when they closely resemble the pre-training data, this ability is highly dependent on the statement's exact surface form. These findings offer a possible explanation for brittle benchmark performance: LLMs may learn shallow, non-robust knowledge representations that allow for only limited generalizability. Our work presents a fundamental challenge for the utility of truthfulness probes, and more broadly, calls for further research on improving the robustness of learned knowledge representations.
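To make the pipeline concrete, here is a minimal sketch of the two ingredients the abstract describes: a semantically-preserving typo perturbation and a linear truthfulness probe measuring separability of true vs. false representations. The function names, the adjacent-character-swap typo scheme, and the mass-mean probe are illustrative assumptions, not necessarily the paper's exact methods.

```python
import random
import numpy as np

def add_typos(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Inject character-level typos (adjacent letter swaps) while keeping
    the statement semantically recoverable by a human reader."""
    rng = random.Random(seed)
    chars = list(text)
    i = 0
    while i < len(chars) - 1:
        # Swap two adjacent letters with probability `rate`.
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
            i += 2  # skip past the swapped pair
        else:
            i += 1
    return "".join(chars)

def mass_mean_separability(true_reps: np.ndarray, false_reps: np.ndarray) -> float:
    """Project hidden-state representations onto the difference of the two
    class means and report accuracy of a midpoint-threshold classifier.
    A score near 1.0 means true/false statements are linearly separable;
    near 0.5 means the representations carry no truth signal."""
    mu_t, mu_f = true_reps.mean(axis=0), false_reps.mean(axis=0)
    direction = mu_t - mu_f
    threshold = (mu_t + mu_f) @ direction / 2
    correct = (true_reps @ direction > threshold).sum() \
        + (false_reps @ direction <= threshold).sum()
    return float(correct) / (len(true_reps) + len(false_reps))
```

In the full setting, `true_reps` and `false_reps` would be hidden states extracted from an LLM for clean statements and for their perturbed counterparts (e.g., `add_typos(stmt, rate=0.2)`); the reported collapse corresponds to the separability score dropping toward chance as the perturbation rate grows.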