Learning State-Tracking from Code Using Linear RNNs

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing sequence models on state-tracking tasks under the standard next-token prediction framework, particularly when actions are only partially observable. The authors encode combinatorial state-tracking problems, such as permutation composition, as REPL-style code traces that interleave variable transformations with state-revealing print statements, thereby casting state tracking as next-token prediction. Systematic comparisons among linear RNNs, non-linear RNNs, and Transformers show that linear RNNs capable of state tracking excel in the fully observable setting, where Transformers fail, yet can underperform non-linear RNNs under partial observability, which the authors frame as tracking a probabilistic finite-state automaton with deterministic state reveals. These findings highlight both the strengths and the inherent constraints of linear RNNs in state-tracking scenarios.
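The paper's exact trace format is not reproduced on this page; the sketch below is an illustrative guess at how permutation composition could be encoded as a REPL-style trace of variable assignments with occasional state-revealing prints (the function name and format are assumptions, not the authors' code):

```python
import random

def make_trace(n=3, steps=5, reveal_prob=0.3, seed=0):
    """Encode a sequence of permutation compositions as a REPL-style
    code trace: each line applies a permutation to the tracked list x,
    and occasional print statements reveal the current state, which a
    next-token predictor must learn to produce."""
    rng = random.Random(seed)
    state = list(range(n))            # identity permutation as initial state
    lines = [f"x = {state}"]
    for _ in range(steps):
        p = list(range(n))
        rng.shuffle(p)
        # compose: new x[j] = old x[p[j]]
        state = [state[i] for i in p]
        lines.append(f"x = [x[i] for i in {p}]")
        if rng.random() < reveal_prob:
            # state reveal: the comment shows the value the model must predict
            lines.append(f"print(x)  # {state}")
    return "\n".join(lines)

print(make_trace())
```

Executing such a trace with a Python interpreter reproduces the revealed states, which is what makes the format self-checking training data.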

📝 Abstract
Over the last few years, state-tracking tasks, particularly permutation composition, have become a testbed for understanding the limits of sequence model architectures such as Transformers and RNNs (linear and non-linear). However, these are often sequence-to-sequence tasks: learning to map actions (permutations) to states, which is incompatible with the next-token prediction setting commonly used to train language models. We address this gap by converting permutation composition into code via REPL traces that interleave variable transformations with state reveals through print statements. We show that linear RNNs capable of state tracking also excel in this setting, while Transformers still fail. Motivated by this representation, we investigate why tracking states in code is difficult in general: actions are not always fully observable. We frame this as tracking the state of a probabilistic finite-state automaton with deterministic state reveals and show that linear RNNs can be worse than non-linear RNNs at state tracking in this setup.
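The partial-observability setup described in the abstract can be sketched as traces in which some actions appear only as an opaque call, so the learner must track a distribution over states until a print deterministically reveals the truth. All names and the trace format below are illustrative assumptions, not the paper's:

```python
import random

def partially_observable_trace(n=3, steps=6, hide_prob=0.5, seed=0):
    """Sketch of a trace for a probabilistic finite-state automaton:
    hidden actions are rendered as an opaque call (hidden_step is a
    hypothetical name, never defined), so only a belief over states
    can be maintained; the final print deterministically reveals the
    true state."""
    rng = random.Random(seed)
    state = list(range(n))
    lines = [f"x = {state}"]
    for _ in range(steps):
        p = list(range(n))
        rng.shuffle(p)
        state = [state[i] for i in p]
        if rng.random() < hide_prob:
            # action hidden from the learner: the permutation p is not shown
            lines.append("x = hidden_step(x)")
        else:
            lines.append(f"x = [x[i] for i in {p}]")
    # deterministic state reveal at the end of the trace
    lines.append(f"print(x)  # {state}")
    return "\n".join(lines)
```

Because hidden steps discard the action identity, exactly tracking the state requires maintaining a distribution over all permutations consistent with the visible lines, which is where the abstract's linear-vs-non-linear RNN gap is said to appear.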
Problem

Research questions and friction points this paper is trying to address.

state-tracking
code
linear RNNs
permutation composition
next-token prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

linear RNNs
state tracking
REPL traces
next-token prediction
probabilistic finite-state automaton