A Survey of Context Engineering for Large Language Models

📅 2025-07-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) excel at complex contextual understanding but exhibit a pronounced capability asymmetry: they struggle to reliably generate long-form outputs of comparable sophistication. Method: The authors establish a unified "context engineering" framework, proposing a taxonomy spanning four foundational dimensions: retrieval, generation, processing, and management. Based on a systematic review and architectural analysis of over 1,300 papers, they construct the first comprehensive technology map for context engineering; identify the intrinsic mechanisms underlying the understanding–generation capability mismatch; and delineate architectural integration pathways for four key application paradigms: retrieval-augmented generation, memory systems, tool-integrated reasoning, and multi-agent coordination. Contribution/Results: The work delivers a standardized conceptual framework, a strategic technology roadmap, and critical breakthrough directions, providing both theoretical foundations and practical guidance for developing advanced context-aware AI systems.

📝 Abstract
The performance of Large Language Models (LLMs) is fundamentally determined by the contextual information provided during inference. This survey introduces Context Engineering, a formal discipline that transcends simple prompt design to encompass the systematic optimization of information payloads for LLMs. We present a comprehensive taxonomy decomposing Context Engineering into its foundational components and the sophisticated implementations that integrate them into intelligent systems. We first examine the foundational components: context retrieval and generation, context processing and context management. We then explore how these components are architecturally integrated to create sophisticated system implementations: retrieval-augmented generation (RAG), memory systems and tool-integrated reasoning, and multi-agent systems. Through this systematic analysis of over 1300 research papers, our survey not only establishes a technical roadmap for the field but also reveals a critical research gap: a fundamental asymmetry exists between model capabilities. While current models, augmented by advanced context engineering, demonstrate remarkable proficiency in understanding complex contexts, they exhibit pronounced limitations in generating equally sophisticated, long-form outputs. Addressing this gap is a defining priority for future research. Ultimately, this survey provides a unified framework for both researchers and engineers advancing context-aware AI.
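The pipeline the abstract describes, foundational components (retrieval, processing, management) composed into an information payload, can be sketched minimally as follows. This is an illustrative toy, not code from the paper: the class, the keyword-overlap retriever, and the character budget are all hypothetical stand-ins for the real techniques the survey catalogs.

```python
from dataclasses import dataclass, field

@dataclass
class ContextEngine:
    """Toy context-engineering pipeline (illustrative; names are hypothetical)."""
    corpus: list[str]                                 # documents available for retrieval
    memory: list[str] = field(default_factory=list)   # persistent context (management)
    budget: int = 500                                 # crude character budget for the payload

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        # Context retrieval: naive keyword overlap stands in for a real retriever.
        scored = sorted(
            self.corpus,
            key=lambda d: -sum(w in d.lower() for w in query.lower().split()),
        )
        return scored[:k]

    def process(self, chunks: list[str]) -> list[str]:
        # Context processing: deduplicate and truncate to fit the budget.
        seen, out, used = set(), [], 0
        for c in chunks:
            if c not in seen and used + len(c) <= self.budget:
                seen.add(c)
                out.append(c)
                used += len(c)
        return out

    def assemble(self, query: str) -> str:
        # Context management + payload construction: memory first, then retrieval.
        parts = self.process(self.memory + self.retrieve(query))
        return "\n".join(parts + [f"Question: {query}"])

engine = ContextEngine(corpus=[
    "RAG grounds model outputs in retrieved documents.",
    "Multi-agent systems coordinate several LLMs.",
])
print(engine.assemble("What does RAG do?"))
```

A production system would replace each method with one of the technique families the survey maps: dense or hybrid retrieval, compression and reranking for processing, and episodic memory stores for management.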
Problem

Research questions and friction points this paper is trying to address.

Systematic optimization of the information payloads supplied to LLMs, beyond ad hoc prompt design
Architectural integration of foundational context components into intelligent systems
The understanding–generation asymmetry: models comprehend complex contexts far better than they generate equally sophisticated long-form outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formalization of Context Engineering as a discipline for optimizing information payloads
A taxonomy decomposing Context Engineering into retrieval and generation, processing, and management
Analysis of system-level integrations: RAG, memory systems, tool-integrated reasoning, and multi-agent systems