🤖 AI Summary
This work addresses a critical yet overlooked issue in tool-augmented large language models: inconsistent semantics of interpreter state persistence between training and deployment, which can cause execution errors or severe inefficiencies. The authors explicitly model state persistence as a first-class semantic property of training data. They construct paired trajectories on the Opaque Knapsack task that differ only in state-persistence behavior and perform a 2×2 cross-fine-tuning and evaluation study on Qwen3-8B. Results show that aligning state persistence between training and deployment substantially improves efficiency and stability: mismatches incur missing-variable errors in roughly 80% of episodes or roughly 3.5× redundant token consumption, while solution quality remains largely unaffected. The study highlights the pivotal role of state persistence in shaping an agent's reasoning paths, stability, and token efficiency.
📝 Abstract
Tool-augmented LLMs are increasingly deployed as agents that interleave natural-language reasoning with executable Python actions, as in CodeAct-style frameworks. In deployment, these agents rely on runtime state that persists across steps. By contrast, common training pipelines treat agent traces as token sequences, with execution semantics left implicit. This raises a data-centric question: Is state persistence merely an inference-time scaffold, or can models learn to exploit it when training data exposes the corresponding execution semantics?
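To make the two execution semantics concrete, here is a minimal sketch (ours, not any particular framework's implementation): a persistent runtime threads one namespace through every action, while a stateless runtime discards it after each step.

```python
class PersistentRuntime:
    """Each action executes in a namespace that survives across steps."""
    def __init__(self):
        self.ns: dict = {}

    def run(self, action: str) -> None:
        exec(action, self.ns)


class StatelessRuntime:
    """Each action executes in a fresh, empty namespace."""
    def run(self, action: str) -> None:
        exec(action, {})


persistent = PersistentRuntime()
persistent.run("best = 42")
persistent.run("print(best)")        # prints 42: 'best' was retained

stateless = StatelessRuntime()
stateless.run("best = 42")
try:
    stateless.run("print(best)")     # 'best' vanished with the previous step
except NameError as err:
    print(err)                       # name 'best' is not defined
```

A model fine-tuned on traces produced under one of these contracts implicitly learns which one holds.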
We isolate state persistence as a training-time variable. We introduce Opaque Knapsack, a procedurally generated family of partially observable optimization tasks designed to prevent one-shot solutions. Item attributes and constraints are hidden behind budgeted tool calls, forcing multi-turn control flow and iterative state revision. Holding task instances, prompts, tools, model, and supervision fixed, we generate paired trajectories differing only in whether interpreter state persists across steps or resets after each action. We then fine-tune identical base models (Qwen3-8B) on each trace variant and evaluate all four train-runtime combinations.
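A hypothetical sketch of the budgeted tool interface this implies is below; the names (probe_value, probe_weight, check_solution), the budget, and the instance sizes are our illustration, not the paper's actual harness. Because attributes are revealed one probe at a time, the agent must spread work across many steps, and whether probed values survive in interpreter variables is precisely the persistence variable under study.

```python
import random

N_ITEMS, CAPACITY, TOOL_BUDGET = 20, 50, 30

# Hidden instance data: the agent can only observe these through probes.
_values = [random.randint(1, 20) for _ in range(N_ITEMS)]
_weights = [random.randint(1, 15) for _ in range(N_ITEMS)]
_calls_left = TOOL_BUDGET


def _spend() -> None:
    """Charge one call against the tool budget."""
    global _calls_left
    if _calls_left <= 0:
        raise RuntimeError("tool budget exhausted")
    _calls_left -= 1


def probe_value(i: int) -> int:
    """Reveal the hidden value of item i (costs one tool call)."""
    _spend()
    return _values[i]


def probe_weight(i: int) -> int:
    """Reveal the hidden weight of item i (costs one tool call)."""
    _spend()
    return _weights[i]


def check_solution(items: list[int]) -> bool:
    """Report whether a candidate subset fits the capacity (one tool call)."""
    _spend()
    return sum(_weights[i] for i in items) <= CAPACITY
```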
Our 2×2 cross-evaluation shows that execution semantics primarily affect how agents reach solutions, not whether they do: solution quality is statistically indistinguishable across conditions, but token cost and stability differ substantially. A persistent-trained model in a stateless runtime triggers missing-variable errors in roughly 80% of episodes; a stateless-trained model in a persistent runtime redundantly re-derives retained state, using roughly 3.5× more tokens.
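Laid out as a grid (qualitative outcomes as reported above; solution quality is statistically indistinguishable in all four cells):

| Trained on \ Evaluated in | Persistent runtime | Stateless runtime |
|---|---|---|
| Persistent traces | aligned | missing-variable errors in ~80% of episodes |
| Stateless traces | ~3.5× tokens (re-derives retained state) | aligned |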
Interpreter persistence should be treated as a first-class semantic property of agent traces. Aligning fine-tuning data with the deployment runtime improves efficiency and avoids the brittle failures caused by train-runtime mismatch.