Cross-Task Experiential Learning on LLM-based Multi-Agent Collaboration

πŸ“… 2025-05-29
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing LLM-based multi-agent systems process tasks in isolation, leading to computational redundancy and poor cross-task generalization. To address this, we propose a graph-structured multi-agent collaborative network and introduce Multi-Agent Experience Learning (MAEL)β€”the first framework enabling explicit cross-task experience reuse. MAEL models task-experience relationships via a graph neural network, constructs a retrievable individual experience pool, incorporates a reward-driven, step-level quality evaluation mechanism, and designs a similarity-based few-shot experience retrieval strategy. This enables explicit accumulation, quantitative assessment, and dynamic reuse of experiences. Evaluated on multiple benchmark datasets, our approach significantly improves convergence speed and problem-solving accuracy, demonstrating that cross-task experience transfer substantially enhances both the efficiency and quality of collaborative reasoning.

πŸ“ Abstract
Large Language Model-based multi-agent systems (MAS) have shown remarkable progress in solving complex tasks through collaborative reasoning and inter-agent critique. However, existing approaches typically treat each task in isolation, resulting in redundant computation and limited generalization across structurally similar tasks. To address this, we introduce multi-agent cross-task experiential learning (MAEL), a novel framework that endows LLM-driven agents with explicit cross-task learning and experience accumulation. We model the task-solving workflow on a graph-structured multi-agent collaboration network, where agents propagate information and coordinate via explicit connectivity. During the experiential learning phase, we quantify the quality of each step in the task-solving workflow and store the resulting rewards, along with the corresponding inputs and outputs, in each agent's individual experience pool. During inference, agents retrieve high-reward, task-relevant experiences as few-shot examples to enhance each reasoning step, thereby enabling more accurate and efficient multi-agent collaboration. Experimental results on diverse datasets demonstrate that MAEL empowers agents to learn effectively from prior task experiences, achieving faster convergence and producing higher-quality solutions on current tasks.
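The store-then-retrieve mechanism the abstract describes can be sketched in a few lines. This is a minimal, hypothetical illustration of a per-agent experience pool with reward filtering and similarity-based few-shot retrieval; the class names, the bag-of-words cosine similarity, and the reward threshold are assumptions for illustration, not details from the MAEL paper.

```python
# Hypothetical sketch of an agent-side experience pool with reward-filtered,
# similarity-based few-shot retrieval. Names, the bag-of-words similarity,
# and the reward threshold are illustrative, not from the paper.
from collections import Counter
from dataclasses import dataclass, field
import math


@dataclass
class Experience:
    task_input: str   # what the agent saw at this workflow step
    output: str       # what the agent produced
    reward: float     # step-level quality score from the learning phase


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


@dataclass
class ExperiencePool:
    min_reward: float = 0.5          # keep only high-reward experiences
    entries: list = field(default_factory=list)

    def store(self, exp: Experience) -> None:
        self.entries.append(exp)

    def retrieve(self, query: str, k: int = 2) -> list:
        """Return the k most similar high-reward experiences as few-shot examples."""
        q = Counter(query.lower().split())
        scored = [
            (cosine(q, Counter(e.task_input.lower().split())), e)
            for e in self.entries
            if e.reward >= self.min_reward
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [e for _, e in scored[:k]]


pool = ExperiencePool()
pool.store(Experience("sort a list of integers", "use merge sort", reward=0.9))
pool.store(Experience("reverse a string", "slice with [::-1]", reward=0.8))
pool.store(Experience("sort a list of tuples", "bubble sort everything", reward=0.2))

shots = pool.retrieve("sort a list of floats", k=1)
print(shots[0].output)  # prints "use merge sort": the high-reward sorting experience
```

In a real system the retrieved experiences would be formatted into the agent's prompt as few-shot demonstrations; a production version would presumably use learned embeddings rather than bag-of-words overlap.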
Problem

Research questions and friction points this paper is trying to address.

Existing LLM-based multi-agent systems treat each task in isolation, with no cross-task learning
Structurally similar tasks incur redundant computation because prior experience is not reused
Collaboration quality is limited without a mechanism to retrieve and reuse past experiences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph-structured multi-agent collaboration network
Cross-task experiential learning framework
Experience pool with reward-based retrieval
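The graph-structured collaboration network listed above can be sketched as agents forwarding outputs along explicit edges. This is a hypothetical minimal example; the topology, agent roles, and message format are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch of a graph-structured agent network: agents are nodes,
# and each agent's output is propagated to its successors along explicit
# edges. The topology and roles below are illustrative, not from the paper.
from collections import defaultdict


class Agent:
    def __init__(self, name: str):
        self.name = name
        self.inbox: list[str] = []

    def act(self) -> str:
        # Placeholder for an LLM call: wrap whatever this agent received.
        context = "; ".join(self.inbox) if self.inbox else "task input"
        return f"{self.name}({context})"


class AgentGraph:
    def __init__(self):
        self.agents: dict[str, Agent] = {}
        self.edges: dict[str, list[str]] = defaultdict(list)

    def add_edge(self, src: str, dst: str) -> None:
        for n in (src, dst):
            self.agents.setdefault(n, Agent(n))
        self.edges[src].append(dst)

    def run(self, order: list[str]) -> dict[str, str]:
        """Run agents in topological order, forwarding outputs along edges."""
        outputs = {}
        for name in order:
            out = self.agents[name].act()
            outputs[name] = out
            for nxt in self.edges[name]:
                self.agents[nxt].inbox.append(out)
        return outputs


g = AgentGraph()
g.add_edge("planner", "coder")
g.add_edge("planner", "critic")
g.add_edge("coder", "critic")
results = g.run(["planner", "coder", "critic"])
print(results["critic"])  # critic sees both the planner's and coder's outputs
```

The explicit edge list is what makes the connectivity inspectable: each step's inputs and output can be logged and scored, which is the hook a reward-based experience pool would attach to.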
πŸ‘₯ Authors

Yilong Li
PhD, Stanford University
operating systems, distributed systems, datacenter computing, networking

Cheng Qian
Shanghai Jiao Tong University

Yu Xia
Tsinghua University

Ruijie Shi
Tsinghua University

Yufan Dang
Tsinghua University
Natural Language Processing, Machine Learning, Artificial Intelligence

Zihao Xie
Tsinghua University

Ziming You
Peking University

Weize Chen
Tsinghua University
NLP, ML

Cheng Yang
Beijing University of Posts and Telecommunications

Weichuan Liu
Siemens

Ye Tian
Tencent Robotics X

Xuantang Xiong
Tencent Robotics X

Lei Han
Tencent Robotics X

Zhiyuan Liu
Tsinghua University

Maosong Sun
Professor of Computer Science and Technology, Tsinghua University
Natural Language Processing, Artificial Intelligence, Social Computing