MoRAgent: Parameter Efficient Agent Tuning with Mixture-of-Roles

📅 2025-12-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address insufficient role modeling in parameter-efficient fine-tuning (PEFT) of large language model (LLM) agents, this paper proposes role decoupling—decomposing agent capabilities into three specialized roles: Reasoner, Executor, and Summarizer—and introduces a LoRA-based multi-role collaborative adaptation framework. Its core contributions are: (1) the first Mixture-of-Roles (MoR) architecture, enabling dynamic coordination among role-specific LoRA modules; and (2) a role-driven data synthesis and reliability verification pipeline supporting the Reason+Action paradigm. Experiments demonstrate that the method introduces fewer than 0.1% trainable parameters yet consistently outperforms existing PEFT approaches across multiple LLMs and agent benchmarks, achieving performance on par with full fine-tuning.

📝 Abstract
Despite recent advancements in fine-tuning large language models (LLMs) to facilitate agent tasks, parameter-efficient fine-tuning (PEFT) methodologies for agents remain largely unexplored. In this paper, we introduce three key strategies for PEFT in agent tasks: 1) Inspired by the increasingly dominant Reason+Action paradigm, we first decompose the capabilities necessary for agent tasks into three distinct roles: reasoner, executor, and summarizer. The reasoner is responsible for comprehending the user's query and determining the next role based on the execution trajectory. The executor is tasked with identifying the appropriate functions and parameters to invoke. The summarizer conveys the distilled information from conversations back to the user. 2) We then propose the Mixture-of-Roles (MoR) framework, which comprises three specialized Low-Rank Adaptation (LoRA) groups, each designated to fulfill a distinct role. By focusing on their respective specialized capabilities and engaging in collaborative interactions, these LoRAs collectively accomplish the agent task. 3) To effectively fine-tune the framework, we develop a multi-role data generation pipeline based on publicly available datasets, incorporating role-specific content completion and reliability verification. We conduct extensive experiments and thorough ablation studies on various LLMs and agent benchmarks, demonstrating the effectiveness of the proposed method. This project is publicly available at https://mor-agent.github.io.
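The core mechanism the abstract describes—one frozen base model shared by three role-specific LoRA groups, with the active role selecting which adapter is applied—can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all class names, shapes, and the rank value are assumptions for illustration.

```python
# Minimal sketch of the Mixture-of-Roles idea: a frozen base weight
# shared by three role-specific LoRA adapters (reasoner, executor,
# summarizer). Only the low-rank (A, B) pairs would be trained.
# All names, shapes, and hyperparameters here are assumptions.
import numpy as np

class RoleLoRALinear:
    """A linear layer with one low-rank adapter per role.

    Effective weight for the active role: W + B[role] @ A[role],
    where rank << min(d_in, d_out), so trainable parameters per role
    are 2 * rank * d instead of d * d for the frozen base.
    """

    def __init__(self, d_in, d_out, rank=4,
                 roles=("reasoner", "executor", "summarizer")):
        rng = np.random.default_rng(0)
        self.W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen base
        # Per-role low-rank factors; B is zero-initialized (standard for
        # LoRA), so each adapter starts as an exact no-op.
        self.A = {r: rng.standard_normal((rank, d_in)) * 0.02 for r in roles}
        self.B = {r: np.zeros((d_out, rank)) for r in roles}

    def forward(self, x, role):
        # Route the input through the adapter of the active role only.
        delta = self.B[role] @ self.A[role]
        return (self.W + delta) @ x

layer = RoleLoRALinear(d_in=8, d_out=8)
x = np.ones(8)
# With zero-initialized B, every role initially reproduces the base layer.
base = layer.W @ x
for role in ("reasoner", "executor", "summarizer"):
    assert np.allclose(layer.forward(x, role), base)
```

In a full agent loop, the reasoner's output would determine which role (and hence which adapter group) handles the next turn; here the routing is reduced to a dictionary lookup to keep the sketch self-contained.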
Problem

Research questions and friction points this paper is trying to address.

How to fine-tune LLMs for agent tasks efficiently, with minimal trainable parameters
How to decompose agent capabilities into specialized reasoner, executor, and summarizer roles
How to generate reliable multi-role training data for collaborative agent tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes agent tasks into three specialized role components
Uses Mixture-of-Roles with three distinct LoRA groups
Implements multi-role data generation with verification pipeline
Jing Han
University of Cambridge
deep learning, audio signal processing, machine learning, mHealth, affective computing
Binwei Yan
Huawei Noah's Ark Lab
Tianyu Guo
Huawei Noah's Ark Lab
Zheyuan Bai
Huawei Noah's Ark Lab
Mengyu Zheng
Huawei Noah's Ark Lab
Hanting Chen
Huawei Noah's Ark Lab
deep learning, machine learning, computer vision
Ying Nie
Huawei Noah's Ark Lab