Exploring State Tracking Capabilities of Large Language Models

📅 2025-11-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the ability of large language models (LLMs) to maintain consistent entity state representations across multi-step reasoning. To isolate state tracking from other reasoning factors, the authors introduce a benchmark of three well-defined, reproducible state-tracking tasks and systematically analyse LLM performance across scenarios. The experiments show that recent models (GPT-4 and Llama3) can track state reliably, especially when combined with Chain-of-Thought prompting, whereas earlier-generation models understand the task and solve its initial stages but fail after a certain number of steps. The benchmark is released as an extensible test suite to support future research on LLM state maintenance and reasoning consistency.

📝 Abstract
Large Language Models (LLMs) have demonstrated impressive capabilities in solving complex tasks, including those requiring a certain level of reasoning. In this paper, we focus on state tracking, a problem where models need to keep track of the state governing a number of entities. To isolate the state tracking component from other factors, we propose a benchmark based on three well-defined state tracking tasks and analyse the performance of LLMs in different scenarios. The results indicate that the recent generation of LLMs (specifically, GPT-4 and Llama3) are capable of tracking state, especially when integrated with mechanisms such as Chain of Thought. However, models from the former generation, while understanding the task and being able to solve it at the initial stages, often fail at this task after a certain number of steps.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' ability to track entity states
Isolating state tracking from other reasoning factors
Testing model performance degradation over multiple steps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using Chain of Thought for state tracking
Testing LLMs on three benchmark tasks
Evaluating GPT-4 and Llama3 tracking capabilities
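The paper's exact benchmark tasks are not detailed on this page, so as a hypothetical illustration, the sketch below shows one common state-tracking setup: objects move between boxes over a sequence of steps, and a model must report each box's final contents. The function name `generate_task` and all parameters are assumptions for this sketch, not the authors' implementation.

```python
import random

def generate_task(num_boxes=3, num_steps=5, seed=0):
    """Generate a toy state-tracking episode: objects move between
    boxes step by step; the gold answer is each box's final contents."""
    rng = random.Random(seed)
    # Each box starts with one uniquely named item.
    state = {i: {f"item{i}"} for i in range(num_boxes)}
    steps = []
    for _ in range(num_steps):
        src = rng.choice([b for b in state if state[b]])   # non-empty source
        dst = rng.choice([b for b in state if b != src])   # any other box
        item = rng.choice(sorted(state[src]))
        state[src].remove(item)
        state[dst].add(item)
        steps.append(f"Move {item} from box {src} to box {dst}.")
    return steps, {b: sorted(items) for b, items in state.items()}

steps, final_state = generate_task()
prompt = (
    "Boxes start with item0 in box 0, item1 in box 1, item2 in box 2.\n"
    + "\n".join(steps)
    + "\nWhat does each box contain now?"
)
# `final_state` is the gold answer against which a model's reply is scored.
```

Because the generator is seeded, each episode is reproducible, and lengthening `num_steps` directly probes the degradation over longer sequences that the paper reports for earlier-generation models.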
Kiamehr Rezaee
Cardiff NLP, School of Computer Science and Informatics
José Camacho-Collados
Cardiff NLP, School of Computer Science and Informatics
Mohammad Taher Pilehvar
Cardiff University / TeIAS / Cambridge
Artificial Intelligence · Natural Language Processing · Lexical Semantics · Semantic Representation