🤖 AI Summary
This paper investigates whether large language models (LLMs) can implicitly maintain high-fidelity, structured world models during long-sequence generation, using chess as a benchmark, without relying on introspection of internal model mechanisms.
Method: We propose a model-agnostic state evaluation framework grounded in domain semantics: chess rules define state representations, and legal move distributions (“state affordances”) serve as the primary metric for semantic fidelity, replacing dependence on internal activations.
Contribution/Results: The framework is both interpretable and broadly applicable, enabling cross-model and cross-scale diagnosis of long-range state tracking. Experiments reveal systematic state degradation in LLMs during extended reasoning, demonstrating that our approach provides the first reliable, quantitative, and parameter-free tool for assessing structured reasoning without requiring access to model internals.
📝 Abstract
Large Language Models (LLMs) exhibit emergent capabilities in structured domains, suggesting that they may implicitly internalize high-fidelity representations of world models. While probing techniques have shown promising signs of this in scientific and game-based settings, they rely on model-specific internal activations, which limits interpretability and generalizability. In this work, we propose a model-agnostic, state-based evaluation framework using chess as a benchmark to assess whether LLMs preserve the semantics of structured environments. Our method analyzes the downstream legal move distributions ("state affordances") to estimate the semantic fidelity between predicted and actual game states. This approach offers a more meaningful evaluation than conventional string-based metrics by aligning more closely with the strategic and rule-governed nature of chess. Experimental results demonstrate that our metrics capture deficiencies in state tracking, highlighting the limitations of LLMs in maintaining coherent internal models over long sequences. Our framework provides a robust tool for evaluating structured reasoning in LLMs without requiring internal model access, and it generalizes to a wide class of symbolic environments.
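To make the affordance idea concrete, here is a minimal sketch of how a legal-move-based fidelity score could be computed. This is an illustrative assumption, not the paper's exact metric: it compares the legal-move sets (affordances) of a predicted and an actual state via Jaccard overlap, and the move sets below are hand-written stand-ins rather than output of a real legal-move generator.

```python
def affordance_fidelity(predicted_moves: set[str], actual_moves: set[str]) -> float:
    """Jaccard similarity between the legal-move sets (state affordances)
    of a predicted state and the ground-truth state. A score of 1.0 means
    the predicted state affords exactly the same moves as the true state;
    lower scores indicate missing or hallucinated affordances."""
    if not predicted_moves and not actual_moves:
        return 1.0  # two empty affordance sets are trivially identical
    shared = predicted_moves & actual_moves
    union = predicted_moves | actual_moves
    return len(shared) / len(union)

# Toy example (illustrative UCI-style move strings, not a full legal-move
# enumeration): the predicted board drops one real move ("d2d4") and
# hallucinates an illegal one ("e1g1").
actual = {"e2e4", "d2d4", "g1f3", "b1c3"}
predicted = {"e2e4", "g1f3", "b1c3", "e1g1"}
print(affordance_fidelity(predicted, actual))  # 3 shared / 5 in union = 0.6
```

In a full implementation, the two move sets would be produced by a rules engine (e.g. a chess library's legal-move generator) applied to the ground-truth state and to the state reconstructed from the model's output; a distributional variant could weight moves by predicted probability instead of treating them as a flat set.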