AI Summary
This work identifies and formalizes "self-consistent errors" in large language models (LLMs): a critical yet overlooked class of errors in which the model repeatedly generates the same incorrect answer across multiple stochastic samplings. Such errors evade detection by mainstream methods, and their prevalence does not diminish with increasing model scale. To address this, we propose a cross-model hidden-state probing method that fuses latent-layer evidence from an external verifier model to enhance detection robustness. We further introduce a multi-dimensional error-pattern statistical evaluation framework to systematically characterize self-consistent errors. Experiments across three representative LLM families demonstrate that our approach significantly improves detection accuracy, overcoming four fundamental limitations inherent in existing methods for identifying self-consistent errors.
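To make the failure mode concrete, the sketch below shows why agreement across stochastic samples cannot flag a self-consistent error: the agreement score is high precisely because the model repeats the same wrong answer. The sample answers and the `consistency_score` helper are illustrative inventions, not part of the paper's method.

```python
from collections import Counter

def consistency_score(samples):
    """Return the majority answer and the fraction of samples agreeing with it."""
    counts = Counter(samples)
    answer, freq = counts.most_common(1)[0]
    return answer, freq / len(samples)

# Inconsistent error: answers scatter across samples, so low agreement
# correctly signals that the response is unreliable.
ans, score = consistency_score(["1912", "1911", "1912", "1905", "1898"])
print(ans, score)   # "1912", 0.4

# Self-consistent error: the model repeats the same wrong answer in every
# sample, so a consistency-based detector sees perfect agreement and is fooled.
ans, score = consistency_score(["1910"] * 5)
print(ans, score)   # "1910", 1.0
```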
Abstract
As large language models (LLMs) often generate plausible but incorrect content, error detection has become increasingly critical to ensure truthfulness. However, existing detection methods often overlook a critical problem we term the self-consistent error, where an LLM repeatedly generates the same incorrect response across multiple stochastic samples. This work formally defines self-consistent errors and evaluates mainstream detection methods on them. Our investigation reveals two key findings: (1) Unlike inconsistent errors, whose frequency diminishes significantly as LLM scale increases, the frequency of self-consistent errors remains stable or even increases. (2) All four categories of detection methods struggle significantly to detect self-consistent errors. These findings reveal critical limitations in current detection methods and underscore the need for improved ones. Motivated by the observation that self-consistent errors often differ across LLMs, we propose a simple but effective cross-model probe method that fuses hidden-state evidence from an external verifier LLM. Our method significantly enhances performance on self-consistent errors across three LLM families.
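The cross-model probe described above can be sketched as follows: fuse hidden states from the generator and an external verifier LLM by concatenation, then train a lightweight probe to classify response correctness. This is a minimal sketch under stated assumptions; the hidden states here are simulated with NumPy (real usage would extract activations from actual models), the probe is plain logistic regression via gradient descent, and all dimensions and names are illustrative, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: hidden states for N responses from the generator LLM
# (h_gen) and an external verifier LLM (h_ver), with binary correctness labels.
N, d_gen, d_ver = 200, 32, 32
labels = rng.integers(0, 2, size=N)              # 1 = correct, 0 = error

h_gen = rng.normal(size=(N, d_gen))
# Simulated signal: the verifier's states carry evidence about correctness
# that the generator's own states may lack for self-consistent errors.
h_ver = rng.normal(size=(N, d_ver)) + 1.5 * labels[:, None]

# Cross-model fusion: concatenate the two models' hidden states.
X = np.concatenate([h_gen, h_ver], axis=1)

# Train a simple logistic-regression probe on the fused features.
w, b, lr = np.zeros(X.shape[1]), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))       # sigmoid predictions
    w -= lr * (X.T @ (p - labels)) / N           # gradient step on weights
    b -= lr * np.mean(p - labels)                # gradient step on bias

preds = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = float(np.mean(preds == labels))
print(f"probe accuracy on fused hidden states: {accuracy:.2f}")
```

Concatenation is the simplest fusion choice; because the simulated correctness signal lives only in the verifier's dimensions, the trained probe illustrates how external hidden-state evidence can separate errors the generator alone cannot.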