AI Summary
LLM agents frequently fail in multi-step tasks due to unmet preconditions, redundant commands, or misjudged environmental constraints. To address this, we propose a lightweight knowledge-internalization method that requires no retrieval at runtime: the retrieval-augmented reasoning ability of RAG is distilled into the student model's intrinsic reasoning capacity. Specifically, we automatically extract compact hints from failure trajectories, use them via one-shot retrieval at episode start to construct high-quality teacher trajectories, and strip the hint strings during student training, enabling scalable knowledge transfer across model sizes and architectures. Evaluated on the ALFWorld and WebShop benchmarks, our approach achieves a success rate of 91% (+12 percentage points) and a score of 72 (+11), respectively, while reducing inference token consumption by 10-60%. This largely removes RAG's dependence on external knowledge bases and its runtime computational overhead.
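To make the pipeline concrete, the sketch below illustrates the first two steps under stated assumptions: `Trajectory`, `extract_hint`, `teacher_prompt`, and the keyword-overlap retriever are all hypothetical names introduced here for illustration, and the lookup table stands in for an LLM call that would summarize each failure in a real system.

```python
# Sketch of steps (1) and (2): hint extraction from failures and one-shot
# retrieval at episode start. All names are illustrative assumptions,
# not the paper's implementation.
from dataclasses import dataclass

@dataclass
class Trajectory:
    task: str           # natural-language goal, e.g. "put a clean mug on the desk"
    steps: list[str]    # interleaved agent actions and environment observations
    success: bool

# A real system would ask an LLM to summarize each failure mode into a
# compact, reusable hint; this lookup table is a stand-in for that call.
FAILURE_HINTS = {
    "clean": "Pick up an object before trying to clean it.",
    "buy": "Select the required options before clicking 'Buy Now'.",
}

def extract_hint(failed: Trajectory) -> str:
    """Step (1): distill a failed trajectory into a short, reusable hint."""
    for keyword, hint in FAILURE_HINTS.items():
        if keyword in failed.task:
            return hint
    return "Check an action's preconditions before issuing it."

def teacher_prompt(task: str, hint_bank: list[str]) -> str:
    """Step (2): one-shot retrieval -- prepend the single best-matching
    hint to the prompt once, at episode start (not at every step)."""
    def overlap(hint: str) -> int:
        return len(set(hint.lower().split()) & set(task.lower().split()))
    best = max(hint_bank, key=overlap)
    return f"Hint: {best}\nTask: {task}"

# Example: a failure on a cleaning task yields a hint that is later
# retrieved for a similar episode.
bank = [extract_hint(Trajectory("clean the mug", ["clean mug -> error"], False))]
print(teacher_prompt("put a clean mug on the desk", bank))
```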
Abstract
Large language model (LLM) agents deployed for multi-step tasks frequently fail in predictable ways: attempting actions with unmet preconditions, issuing redundant commands, or mishandling environment constraints. While retrieval-augmented generation (RAG) can improve performance by providing runtime guidance, it requires maintaining external knowledge databases and adds computational overhead at every deployment. We propose a simple pipeline that converts inference-time retrieval into learned competence through distillation. Our pipeline (1) extracts compact, reusable hints from agent failures, (2) uses these hints to generate improved teacher trajectories via one-shot retrieval at episode start, and (3) trains student models on these trajectories with the hint strings removed, forcing internalization rather than memorization. Across two interactive benchmarks, ALFWorld (household tasks) and WebShop (online shopping), distilled students consistently outperform baseline agents, achieving up to 91% success on ALFWorld (vs. 79% for baselines) and improving WebShop scores to 72 (vs. 61 for baselines), while using 10-60% fewer tokens than retrieval-augmented teachers, depending on the environment. The approach generalizes across model scales (7B/14B parameters) and agent architectures (ReAct/StateAct), demonstrating that retrieval benefits can be effectively internalized through targeted fine-tuning without permanent runtime dependencies.
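As a companion to the sketch above, the fragment below illustrates step (3): preparing student training data with the hint string stripped, so the student must internalize the corrective behavior rather than learn to echo the hint. Again, the function and field names are assumptions for illustration, not the paper's code.

```python
# Step (3) sketch: build a supervised fine-tuning example from a teacher
# trajectory, removing the "Hint:" line so the student never sees it and
# must internalize the behavior instead of memorizing the hint text.
def to_training_example(teacher_input: str, teacher_steps: list[str]) -> dict[str, str]:
    clean_input = "\n".join(
        line for line in teacher_input.splitlines()
        if not line.startswith("Hint:")
    )
    return {"input": clean_input, "target": "\n".join(teacher_steps)}

example = to_training_example(
    "Hint: Pick up an object before trying to clean it.\nTask: clean the mug",
    ["take mug 1 from countertop 1", "clean mug 1 with sinkbasin 1"],
)
print(example["input"])   # -> "Task: clean the mug" (hint removed)
```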