🤖 AI Summary
This work investigates the ability of large language models (LLMs) to maintain consistent entity state representations across multi-step reasoning, focusing on failure modes in long-horizon state tracking. To this end, we introduce three structured, reproducible state-tracking benchmark tasks and propose a dedicated benchmarking framework designed to isolate and evaluate state-tracking capability. Through systematic experiments, we find that GPT-4 and Llama3 exhibit markedly stronger cross-step state consistency under Chain-of-Thought prompting, while earlier-generation models suffer pervasive state decay in longer sequences. Our study provides empirical evidence of a qualitative leap in state-tracking competence across model generations, revealing a critical inflection point in LLM reasoning robustness. Beyond identifying this generational shift, we establish an evaluation paradigm for reasoning fidelity and release an extensible, open-source test suite to support future research on LLM state maintenance and logical consistency.
📝 Abstract
Large Language Models (LLMs) have demonstrated impressive capabilities in solving complex tasks, including those requiring a certain level of reasoning. In this paper, we focus on state tracking, a problem in which a model must keep track of the state governing a number of entities. To isolate the state-tracking component from other factors, we propose a benchmark based on three well-defined state-tracking tasks and analyse the performance of LLMs in different scenarios. The results indicate that the recent generation of LLMs (specifically, GPT-4 and Llama3) is capable of tracking state, especially when combined with mechanisms such as Chain of Thought. However, models from the previous generation, while understanding the task and solving it correctly in the initial stages, often fail after a certain number of steps.
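To make the state-tracking setting concrete, consider a toy example in the same spirit: entities are moved between containers over a sequence of steps, and a model is asked to report the final configuration. This is a minimal illustrative sketch only; the function names and task details below are hypothetical and do not reproduce the paper's actual benchmark tasks.

```python
# Hypothetical state-tracking task sketch (not the paper's benchmark):
# items are moved between boxes step by step, and the ground-truth
# final state is obtained by replaying the moves.

def apply_steps(state, steps):
    """Apply (item, source, target) moves to a box -> list-of-items state."""
    for item, src, dst in steps:
        state[src].remove(item)   # item leaves its current box
        state[dst].append(item)   # and enters the target box
    return state

initial = {"box1": ["apple"], "box2": ["key"], "box3": []}
steps = [("apple", "box1", "box3"), ("key", "box2", "box1")]

final = apply_steps(initial, steps)
# After replaying the moves, box3 holds the apple and box1 holds the key.
```

An evaluation harness of this kind can compare a model's answer after each step against the replayed ground truth, which is what makes failures "after a certain number of steps" measurable.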