🤖 AI Summary
Current LLM-based agents are constrained by a purely textual paradigm, rendering them susceptible to context drift and fragile multi-turn dependencies in long-horizon tasks. This work proposes CaveAgent, a novel framework that introduces a stateful runtime mechanism, transforming the LLM from a text generator into a state-aware operator. By employing a dual-stream context architecture, CaveAgent decouples semantic reasoning from deterministic Python execution and enables cross-turn persistence and manipulation of complex objects. This approach transcends traditional text-binding limitations, effectively mitigating context drift and catastrophic forgetting. Evaluated on benchmarks such as Tau²-bench and BFCL, CaveAgent achieves substantial gains: a 10.5% improvement in retail task success rate, a 28.4% reduction in total token consumption across multi-turn scenarios, and up to 59% fewer tokens in data-intensive tasks, while successfully handling large-scale data that causes context overflow in competing methods.
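The stateful runtime idea described above can be illustrated with a minimal sketch: a persistent Python namespace into which external objects are injected, against which LLM-generated code is executed, and from which processed results are retrieved across turns. The class and method names below are illustrative, not CaveAgent's actual API.

```python
class StatefulRuntime:
    """Illustrative sketch: a Python namespace that persists across agent turns."""

    def __init__(self):
        self.namespace = {}  # objects live here between turns

    def inject(self, name, obj):
        """Place a complex external object (e.g., a DataFrame) into the runtime."""
        self.namespace[name] = obj

    def execute(self, code):
        """Run LLM-generated code against the persistent namespace."""
        exec(code, self.namespace)

    def retrieve(self, name):
        """Hand a processed object losslessly to a downstream consumer."""
        return self.namespace[name]


# Turn 1: the agent filters injected data with generated code.
rt = StatefulRuntime()
rt.inject("orders", [{"id": 1, "total": 120}, {"id": 2, "total": 40}])
rt.execute("big_orders = [o for o in orders if o['total'] > 100]")

# Turn 2: a later turn reuses `big_orders` directly, with no re-serialization
# into the prompt and no risk of the data drifting in the context window.
rt.execute("order_ids = [o['id'] for o in big_orders]")
print(rt.retrieve("order_ids"))  # -> [1]
```

Because the objects never round-trip through text, later turns operate on exactly the data produced earlier, which is the persistence property the summary credits with eliminating context drift.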
📝 Abstract
LLM-based agents are increasingly capable of complex task execution, yet current agentic systems remain constrained by text-centric paradigms. Traditional approaches rely on procedural JSON-based function calling, which often struggles with long-horizon tasks due to fragile multi-turn dependencies and context drift. In this paper, we present CaveAgent, a framework that transforms the paradigm from "LLM-as-Text-Generator" to "LLM-as-Runtime-Operator." We introduce a Dual-stream Context Architecture that decouples state management into a lightweight semantic stream for reasoning and a persistent, deterministic Python Runtime stream for execution. In addition to leveraging code generation to efficiently resolve interdependent sub-tasks (e.g., loops, conditionals) in a single step, we introduce Stateful Runtime Management in CaveAgent. Distinct from existing code-based approaches that remain text-bound and lack support for external object injection and retrieval, CaveAgent injects, manipulates, and retrieves complex Python objects (e.g., DataFrames, database connections) that persist across turns. This persistence mechanism acts as a high-fidelity external memory, eliminating context drift and catastrophic forgetting while ensuring that processed data flows losslessly to downstream applications. Comprehensive evaluations on Tau²-bench, BFCL, and various case studies across representative SOTA LLMs demonstrate CaveAgent's superiority. Specifically, our framework achieves a 10.5% success rate improvement on retail tasks and reduces total token consumption by 28.4% in multi-turn scenarios. On data-intensive tasks, direct variable storage and retrieval reduces token consumption by 59%, allowing CaveAgent to handle large-scale data that causes context overflow failures in both JSON-based and Code-based agents.
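The dual-stream separation and its token savings can be sketched as follows: the runtime stream holds full objects, while the semantic stream shown to the LLM carries only a compact handle per object. This is a hypothetical illustration of the idea, not the paper's implementation; all names here are invented for the sketch.

```python
# Hypothetical dual-stream sketch: full data stays in the deterministic
# runtime stream; the LLM-facing semantic stream sees only short handles.

runtime_store = {}       # execution stream: complete Python objects
semantic_context = []    # semantic stream: lightweight textual references

def bind(name, obj):
    """Keep the object in the runtime; expose only a summary handle to the LLM."""
    runtime_store[name] = obj
    semantic_context.append(f"<var {name}: {type(obj).__name__}, len={len(obj)}>")

# A large result that would overflow a text-bound context if serialized.
rows = [{"sku": i, "qty": i % 5} for i in range(10_000)]
bind("rows", rows)

# The semantic stream costs a handful of tokens instead of 10,000 rows.
print(semantic_context[0])  # -> <var rows: list, len=10000>

# Generated code still computes over the full object deterministically.
total_qty = sum(r["qty"] for r in runtime_store["rows"])
```

Only the handle ever enters the prompt, so context cost is independent of data size, which is the mechanism behind the reported reduction in tokens on data-intensive tasks.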