Lessons Learned: A Multi-Agent Framework for Code LLMs to Learn and Improve

📅 2025-05-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Multiple code large language model (LLM) agents struggle to collaboratively enhance performance when their individual expertise is unknown. Method: The authors propose a prior-knowledge-free multi-agent collaboration framework centered on a novel “lessons solicitation–banking–selection” mechanism: agents structurally encode their successful and failed execution traces as reusable lessons, store them in a shared memory bank, and autonomously retrieve, select, and integrate complementary lessons via experience-driven dynamic routing. The approach combines multi-agent systems, lightweight prompt engineering, and adaptive routing, enabling efficient cooperation among 7B-scale models (e.g., CodeLlama, StarCoder). Contribution/Results: Experiments on code optimization tasks demonstrate that ensembles of 7B models under this framework outperform both a standalone 70B model and state-of-the-art multi-LLM methods, achieving substantial gains in accuracy and robustness.
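The solicitation–banking–selection loop described above can be sketched in code. The sketch below is purely illustrative: the `Lesson` schema, the class and method names, and the selection heuristic (same category, other agents' lessons, successes ranked first) are assumptions for the example, not the paper's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Lesson:
    """A reusable lesson distilled from one agent's execution trace (hypothetical schema)."""
    agent: str     # which agent produced the trace
    category: str  # e.g. the optimization category the trace belonged to
    outcome: str   # "success" or "failure"
    note: str      # the distilled advice passed on to other agents


class LessonBank:
    """Shared memory bank for the solicitation-banking-selection loop (illustrative)."""

    def __init__(self) -> None:
        self._lessons: list[Lesson] = []

    def deposit(self, lesson: Lesson) -> None:
        # Banking: any agent may store a lesson distilled from its own trace.
        self._lessons.append(lesson)

    def select(self, category: str, requester: str, k: int = 3) -> list[Lesson]:
        # Selection: retrieve complementary lessons -- same category,
        # produced by *other* agents, with successes ranked before failures.
        pool = [l for l in self._lessons
                if l.category == category and l.agent != requester]
        pool.sort(key=lambda l: l.outcome != "success")
        return pool[:k]


# Solicitation: each agent distills its traces into lessons and deposits them.
bank = LessonBank()
bank.deposit(Lesson("codellama-7b", "loop-fusion", "success",
                    "fuse adjacent loops before vectorizing"))
bank.deposit(Lesson("starcoder-7b", "loop-fusion", "failure",
                    "fusing loops with aliased writes broke correctness"))
bank.deposit(Lesson("codellama-7b", "memoization", "success",
                    "cache pure helper calls keyed on their arguments"))

# An agent tackling a loop-fusion problem pulls its peers' lessons into its prompt.
picked = bank.select("loop-fusion", requester="starcoder-7b")
for lesson in picked:
    print(f"[{lesson.outcome}] from {lesson.agent}: {lesson.note}")
```

In practice the selected lessons would be serialized into the requesting agent's prompt; the experience-driven dynamic routing the paper describes would replace the simple ranking heuristic used here.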

📝 Abstract
Recent studies show that LLMs possess different skills and specialize in different tasks. In fact, we observe that this varied performance occurs at several levels of granularity. For example, in the code optimization task, code LLMs excel at different optimization categories, and no single model dominates the others. This observation prompts the question of how one leverages multiple LLM agents to solve a coding problem without knowing their complementary strengths a priori. We argue that a team of agents can learn from each other's successes and failures so as to improve their own performance. Thus, a lesson is the knowledge produced by an agent and passed on to other agents in the collective solution process. We propose a lesson-based collaboration framework, design the lesson solicitation--banking--selection mechanism, and demonstrate that a team of small LLMs with lessons learned can outperform a much larger LLM and other multi-LLM collaboration methods.
Problem

Research questions and friction points this paper is trying to address.

Leveraging multiple LLM agents for coding tasks without prior knowledge of their strengths
Improving code LLM performance through collaborative learning from successes and failures
Enhancing small LLM teams' effectiveness via lesson-based collaboration frameworks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent framework for code LLMs
Lesson solicitation-banking-selection mechanism
Demonstration that a team of small LLMs can outperform a much larger single LLM