Distilling LLM Agent into Small Models with Retrieval and Code Tools

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the susceptibility of small language models (SLMs) to hallucination and poor generalization in factual and mathematical reasoning, this paper proposes Agent Distillation—a novel framework for distilling knowledge from retrieval-augmented, code-executing LLM agents into 0.5B–3B-parameter SLMs. Methodologically, it (1) introduces a *first-thought prefix* prompting strategy to enhance teacher trajectory quality; (2) incorporates self-consistent action generation to improve reasoning robustness; and (3) integrates chain-of-thought distillation, retrieval-augmented generation (RAG), tool-use instruction, and multi-stage supervised fine-tuning. Experiments across eight factual and mathematical reasoning benchmarks demonstrate that the distilled 0.5B, 1.5B, and 3B models match or exceed the performance of corresponding 1.5B, 3B, and 7B CoT-distilled baselines, respectively. Notably, the approach significantly improves cross-domain generalization and tool-call accuracy—marking the first successful distillation of agent-level capabilities (retrieval + execution) into sub-3B models.
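The first-thought prefix can be pictured as a two-step prompting routine: elicit a chain-of-thought-style opening thought from the teacher, then seed the agent trajectory with it so subsequent retrieval/code actions start from a coherent plan. A minimal sketch, where `generate_cot` and `generate_agent` are hypothetical callables standing in for the teacher's CoT and agent decoding modes (the actual prompt templates are assumptions, not the paper's):

```python
def first_thought_prefix(question, generate_cot, generate_agent):
    """Sketch of first-thought prefixing.

    `generate_cot` and `generate_agent` are placeholder callables
    mapping a prompt string to generated text.
    """
    # Keep only the opening reasoning step of a plain CoT answer.
    first_thought = generate_cot(
        f"Question: {question}\nLet's think step by step."
    ).split("\n")[0].strip()

    # Prefix the agent prompt with that thought, so the trajectory of
    # retrieval/code actions begins from a well-formed plan.
    agent_prompt = f"Question: {question}\nThought: {first_thought}\nAction:"
    return f"Thought: {first_thought}\n{generate_agent(agent_prompt)}"
```

The point of the prefix is purely data quality: the teacher's trajectories, now opened by a CoT thought, become better supervised fine-tuning targets for the small agent.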

📝 Abstract
Large language models (LLMs) excel at complex reasoning tasks but remain computationally expensive, limiting their practical deployment. To address this, recent works have focused on distilling reasoning capabilities into smaller language models (sLMs) using chain-of-thought (CoT) traces from teacher LLMs. However, this approach struggles in scenarios requiring rare factual knowledge or precise computation, where sLMs often hallucinate due to limited capability. In this work, we propose Agent Distillation, a framework for transferring not only reasoning capability but full task-solving behavior from LLM-based agents into sLMs with retrieval and code tools. We improve agent distillation along two complementary axes: (1) we introduce a prompting method called first-thought prefix to enhance the quality of teacher-generated trajectories; and (2) we propose a self-consistent action generation for improving test-time robustness of small agents. We evaluate our method on eight reasoning tasks across factual and mathematical domains, covering both in-domain and out-of-domain generalization. Our results show that sLMs as small as 0.5B, 1.5B, 3B parameters can achieve performance competitive with next-tier larger 1.5B, 3B, 7B models fine-tuned using CoT distillation, demonstrating the potential of agent distillation for building practical, tool-using small agents. Our code is available at https://github.com/Nardien/agent-distillation.
Problem

Research questions and friction points this paper is trying to address.

Reduce computational cost of LLMs via distillation into smaller models
Enhance small models' accuracy in rare facts and precise computations
Transfer full task-solving behavior from LLM agents to small models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agent Distillation transfers full task-solving behavior
First-thought prefix enhances teacher-generated trajectories
Self-consistent action generation improves test-time robustness
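Self-consistent action generation, as described above, can be sketched as a majority vote over stochastically sampled candidate actions at test time. Here `sample_action` is a hypothetical stand-in for the small agent's sampling-based decoder; the voting scheme over exact string matches is an illustrative assumption:

```python
from collections import Counter

def self_consistent_action(sample_action, question, k=5):
    """Sample k candidate actions and return the most frequent one.

    `sample_action` is a placeholder callable that stochastically maps
    a question to an action string (e.g. a code snippet or retrieval
    query); majority voting filters out one-off decoding errors.
    """
    candidates = [sample_action(question) for _ in range(k)]
    action, _count = Counter(candidates).most_common(1)[0]
    return action
```

The design intuition mirrors self-consistency for CoT: a single sampled action from a small model may be malformed, but the modal action across several samples is more likely to execute correctly.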