Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address two pervasive failure modes of context adaptation in large language model (LLM) applications, namely *brevity bias* (over-summarization that erodes domain-specific insights) and *context collapse* (progressive loss of detail during iterative rewriting), this paper proposes the ACE framework. ACE treats context as a dynamically evolving strategy playbook and applies structured, incremental updates through three modular stages: generation, reflection, and curation. Building on the adaptive memory of Dynamic Cheatsheet, it combines long-context modeling, execution-feedback-driven evolution, and both offline and online context optimization, enabling self-improvement without labeled supervision. Evaluated on agent-centric and financial-domain benchmarks, ACE achieves +10.6% and +8.6% performance gains, respectively, while substantially reducing adaptation latency and inference cost; on the AppWorld leaderboard it matches the top-ranked production-grade agent on the overall average and surpasses it on the harder test-challenge split.

📝 Abstract
Large language model (LLM) applications such as agents and domain-specific reasoning increasingly rely on context adaptation -- modifying inputs with instructions, strategies, or evidence, rather than weight updates. Prior approaches improve usability but often suffer from brevity bias, which drops domain insights for concise summaries, and from context collapse, where iterative rewriting erodes details over time. Building on the adaptive memory introduced by Dynamic Cheatsheet, we introduce ACE (Agentic Context Engineering), a framework that treats contexts as evolving playbooks that accumulate, refine, and organize strategies through a modular process of generation, reflection, and curation. ACE prevents collapse with structured, incremental updates that preserve detailed knowledge and scale with long-context models. Across agent and domain-specific benchmarks, ACE optimizes contexts both offline (e.g., system prompts) and online (e.g., agent memory), consistently outperforming strong baselines: +10.6% on agents and +8.6% on finance, while significantly reducing adaptation latency and rollout cost. Notably, ACE could adapt effectively without labeled supervision and instead by leveraging natural execution feedback. On the AppWorld leaderboard, ACE matches the top-ranked production-level agent on the overall average and surpasses it on the harder test-challenge split, despite using a smaller open-source model. These results show that comprehensive, evolving contexts enable scalable, efficient, and self-improving LLM systems with low overhead.
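The abstract describes ACE's modular generation, reflection, and curation loop over an evolving playbook. Below is a minimal sketch of how such a loop could be wired up, assuming a generic `llm(prompt)` completion callable and a `run_task` executor that returns an execution trace; the names, prompts, and `Playbook` class are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of an ACE-style Generate -> Reflect -> Curate adaptation step.
# Assumptions (not the authors' API): `llm` is any prompt-to-text callable,
# `run_task` executes the task and returns a trace with natural feedback.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Playbook:
    """Evolving context: an ordered list of strategy bullets."""
    bullets: List[str] = field(default_factory=list)

    def render(self) -> str:
        return "\n".join(f"- {b}" for b in self.bullets)


def ace_step(task: str,
             playbook: Playbook,
             llm: Callable[[str], str],
             run_task: Callable[[str], str]) -> Playbook:
    # Generate: attempt the task conditioned on the current playbook.
    trajectory = run_task(f"Strategies:\n{playbook.render()}\n\nTask: {task}")

    # Reflect: distill concrete lessons from natural execution feedback
    # (no labeled supervision), one lesson per line.
    lessons = llm(
        "List new, concrete strategies learned from this attempt, one per line:\n"
        + trajectory
    ).splitlines()

    # Curate: merge only novel lessons as incremental additions, preserving
    # existing bullets instead of rewriting the whole context.
    for lesson in lessons:
        lesson = lesson.strip().lstrip("- ").strip()
        if lesson and lesson not in playbook.bullets:
            playbook.bullets.append(lesson)
    return playbook
```

The design choice mirrored here is that curation appends small deltas rather than rewriting the playbook wholesale, which is how ACE avoids eroding accumulated detail over repeated adaptation rounds.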
Problem

Research questions and friction points this paper is trying to address.

Addresses brevity bias and context collapse in LLM adaptation
Develops evolving playbooks to accumulate and refine strategies
Enables scalable self-improving systems without labeled supervision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evolving playbooks accumulate, refine, and organize strategies
Structured, incremental updates prevent context collapse and preserve details (see the sketch after this list)
Leverages natural execution feedback for self-improvement without supervision
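A hedged illustration of the "structured, incremental updates" idea from the list above: the context is kept as an itemized playbook and updates arrive as small, typed deltas, so untouched items are carried over verbatim instead of being re-summarized. The ADD/EDIT delta format and the `apply_deltas` helper are assumptions for illustration, not the paper's exact schema.

```python
# Illustrative delta application over an itemized playbook (assumed format).
from typing import Dict, List, Tuple


def apply_deltas(items: Dict[int, str],
                 deltas: List[Tuple[str, int, str]]) -> Dict[int, str]:
    """Apply ('ADD' | 'EDIT', item_id, text) deltas; untouched items are kept verbatim."""
    updated = dict(items)  # existing detail is preserved by default
    for op, item_id, text in deltas:
        if op == "ADD" and item_id not in updated:
            updated[item_id] = text
        elif op == "EDIT" and item_id in updated:
            updated[item_id] = text
    return updated


# Example: two deltas touch the playbook; item 1 is carried over untouched.
playbook = {1: "Check API pagination limits before batch calls.",
            2: "Validate date formats against the task spec."}
deltas = [("ADD", 3, "Cache login tokens across sub-tasks."),
          ("EDIT", 2, "Validate date formats (ISO 8601) against the task spec.")]
print(apply_deltas(playbook, deltas))
```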
🔎 Similar Papers
No similar papers found.
Qizheng Zhang
Stanford University
Changran Hu
University of California, Berkeley
LLM, long context, Agentic AI, Post Training
Shubhangi Upasani
SambaNova Systems, Inc.
Boyuan Ma
SambaNova Systems, Inc.
Fenglu Hong
SambaNova Systems, Inc.
Vamsidhar Kamanuru
University of California San Diego
GenAI, ML, Robotics
Jay Rainton
SambaNova Systems, Inc.
Chen Wu
SambaNova Systems, Inc.
Mengmeng Ji
SambaNova Systems, Inc.
Hanchen Li
UC Berkeley
Urmish Thakker
SambaNova Systems, Inc.
James Zou
Stanford University
Machine learning, computational biology, computational health, statistics, biotech
Kunle Olukotun
Cadence Design Systems Professor of Computer Science, Stanford University
computer architecture, parallel computing, programming languages