Beyond Manuals and Tasks: Instance-Level Context Learning for LLM Agents

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies a critical omission in current LLM-based agents: the neglect of *instance-level context*, the verifiable, reusable, environment-bound facts (e.g., object locations, crafting recipes) on which complex tasks depend, leading to unreliable decision-making. To address this, we formally define the *Instance-Level Context Learning* (ILCL) problem and propose the first annotation-free, transferable framework for constructing such context. Our approach employs a TODO-forest-guided exploration strategy coupled with a lightweight *plan-act-extract* loop, transforming one-time environmental interaction into a structured, high-precision knowledge base. Evaluated on TextWorld, ALFWorld, and Crafter, the method achieves substantial performance gains: on TextWorld, ReAct's mean success rate rises from 37% to 95%, and IGE's from 81% to 95%. Moreover, the learned context improves reasoning efficiency and cross-task generalization, demonstrating robustness beyond domain-specific fine-tuning.

📝 Abstract
Large language model (LLM) agents typically receive two kinds of context: (i) environment-level manuals that define interaction interfaces and global rules, and (ii) task-level guidance or demonstrations tied to specific goals. In this work, we identify a crucial but overlooked third type of context, instance-level context, which consists of verifiable and reusable facts tied to a specific environment instance, such as object locations, crafting recipes, and local rules. We argue that the absence of instance-level context is a common source of failure for LLM agents in complex tasks, as success often depends not only on reasoning over global rules or task prompts but also on making decisions based on precise and persistent facts. Acquiring such context requires more than memorization: the challenge lies in efficiently exploring, validating, and formatting these facts under tight interaction budgets. We formalize this problem as Instance-Level Context Learning (ILCL) and introduce our task-agnostic method to solve it. Our method performs a guided exploration, using a compact TODO forest to intelligently prioritize its next actions and a lightweight plan-act-extract loop to execute them. This process automatically produces a high-precision context document that is reusable across many downstream tasks and agents, thereby amortizing the initial exploration cost. Experiments across TextWorld, ALFWorld, and Crafter demonstrate consistent gains in both success and efficiency: for instance, ReAct's mean success rate in TextWorld rises from 37% to 95%, while IGE improves from 81% to 95%. By transforming one-off exploration into persistent, reusable knowledge, our method complements existing contexts to enable more reliable and efficient LLM agents.
Problem

Research questions and friction points this paper is trying to address.

LLM agents lack instance-level context for complex tasks
Efficiently acquiring verifiable facts under tight interaction constraints
Transforming exploration into reusable knowledge for multiple tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces instance-level context learning for agents
Uses TODO forest for guided exploration prioritization
Implements plan-act-extract loop for context documentation
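The paper does not include an implementation here, but the exploration procedure described above can be sketched roughly as follows. This is a minimal illustration under assumed interfaces: `TodoNode`, `pick_next`, `explore`, and the `plan`/`act`/`extract` callables are hypothetical names standing in for the LLM planner, the environment step function, and the fact extractor; they are not the authors' actual API.

```python
from dataclasses import dataclass, field

@dataclass
class TodoNode:
    """A pending exploration goal (e.g. 'inspect the fridge')."""
    goal: str
    priority: float = 1.0
    done: bool = False
    children: list["TodoNode"] = field(default_factory=list)

def pick_next(forest):
    """Return the highest-priority unfinished node across all trees."""
    best, stack = None, list(forest)
    while stack:
        node = stack.pop()
        stack.extend(node.children)
        if not node.done and (best is None or node.priority > best.priority):
            best = node
    return best

def explore(forest, plan, act, extract, budget=50):
    """Plan-act-extract loop: turn a bounded number of environment
    interactions into a list of verified, reusable facts."""
    facts = []
    for _ in range(budget):
        todo = pick_next(forest)
        if todo is None:
            break
        actions = plan(todo.goal)                 # planner proposes concrete steps
        observations = [act(a) for a in actions]  # execute them in the environment
        new_facts, follow_ups = extract(observations)  # facts + new sub-goals
        facts.extend(new_facts)
        todo.children.extend(TodoNode(g) for g in follow_ups)
        todo.done = True
    return facts
```

The key design point the sketch captures is the amortization the abstract describes: exploration is driven by the TODO forest rather than by any single task, and the returned fact list forms the persistent context document reused by downstream agents.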
Kuntai Cai
ByteDance
Juncheng Liu
National University of Singapore
Xianglin Yang
National University of Singapore
Visualization, Explainable AI
Zhaojie Niu
ByteDance
Xiaokui Xiao
National University of Singapore
Databases, Data Management, Data Privacy
Xing Chen
ByteDance