MAPGD: Multi-Agent Prompt Gradient Descent for Collaborative Prompt Optimization

📅 2025-09-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing prompt engineering methods suffer from reliance on a single optimization trajectory, limited perspective, severe gradient conflicts, and high computational overhead. To address these issues, this paper proposes MAPGD—a Multi-Agent Prompt Gradient Descent framework. MAPGD employs a task-decomposition agent for specialized role assignment, designs a semantic gradient fusion mechanism to harmonize heterogeneous gradient directions, and integrates a bandit-based candidate selection strategy with gradient-descent-inspired heuristic search—ensuring theoretical convergence while enhancing optimization efficiency and robustness. The framework enables interpretable prompt evolution and demonstrates significant improvements over single-agent and random baselines across classification, generation, and reasoning tasks: average accuracy increases by 7.2%, iteration count decreases by 38%, and computational efficiency is preserved alongside performance gains.
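The summary describes bandit-based candidate selection only in prose. As an illustrative sketch (not the paper's implementation), treating each candidate prompt as a bandit arm and a noisy minibatch score as its reward leads naturally to UCB1-style selection. The names `ucb1_select` and `score_fn` below are hypothetical:

```python
import math

def ucb1_select(prompts, score_fn, budget=60, c=1.4):
    """Pick the best prompt candidate via UCB1 bandit selection.

    Each candidate prompt is an "arm"; score_fn(prompt) returns a noisy
    reward in [0, 1], e.g. accuracy on a sampled minibatch of examples.
    """
    counts = [0] * len(prompts)
    sums = [0.0] * len(prompts)
    # Initialize: evaluate every candidate once.
    for i, p in enumerate(prompts):
        sums[i] += score_fn(p)
        counts[i] = 1
    for t in range(len(prompts), budget):
        # UCB score: empirical mean plus an exploration bonus that
        # shrinks as a candidate is evaluated more often.
        ucb = [sums[i] / counts[i] + c * math.sqrt(math.log(t + 1) / counts[i])
               for i in range(len(prompts))]
        i = max(range(len(prompts)), key=lambda k: ucb[k])
        sums[i] += score_fn(prompts[i])
        counts[i] += 1
    # Return the candidate with the best empirical mean.
    best = max(range(len(prompts)), key=lambda k: sums[k] / counts[k])
    return prompts[best]
```

The exploration bonus spends evaluation budget on under-sampled candidates, which is how a bandit view trades exploration against exploitation without scoring every prompt on the full dataset.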

📝 Abstract
Prompt engineering is crucial for leveraging large language models (LLMs), but existing methods often rely on a single optimization trajectory, limiting adaptability and efficiency while suffering from narrow perspectives, gradient conflicts, and high computational cost. We propose MAPGD (Multi-Agent Prompt Gradient Descent), a framework integrating multi-agent collaboration with gradient-based optimization. MAPGD features specialized agents for task clarity, example selection, format design, and stylistic refinement; semantic gradient coordination to resolve conflicts; bandit-based candidate selection for efficient exploration-exploitation; and theoretical convergence guarantees. Experiments on classification, generation, and reasoning tasks show MAPGD outperforms single-agent and random baselines in accuracy and efficiency. Ablations confirm the benefits of gradient fusion, agent specialization, and conflict resolution, providing a unified, gradient-inspired multi-agent approach to robust and interpretable prompt optimization.
Problem

Research questions and friction points this paper is trying to address.

Optimizing prompts for large language models collaboratively
Resolving gradient conflicts and narrow perspectives in optimization
Reducing computational costs while improving adaptability and efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent collaboration with gradient optimization
Semantic gradient coordination resolves conflicts
Bandit-based candidate selection balances exploration-exploitation
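The page does not show how conflicting gradient directions are reconciled. One common way to resolve such conflicts (a PCGrad-style projection, used here purely as a hypothetical sketch of what "semantic gradient coordination" could look like on embedding vectors, not the authors' algorithm):

```python
def fuse_gradients(grads):
    """Fuse per-agent "semantic gradient" vectors (equal-length lists of
    floats): for each pair with a conflicting (negative dot-product)
    direction, project out the conflicting component, then average.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    projected = []
    for i, g in enumerate(grads):
        g = list(g)
        for j, h in enumerate(grads):
            if i == j:
                continue
            d = dot(g, h)
            if d < 0:  # conflict: remove the component of g along h
                scale = d / dot(h, h)
                g = [gi - scale * hi for gi, hi in zip(g, h)]
        projected.append(g)
    # Average the de-conflicted directions into one fused update.
    n = len(projected)
    return [sum(v[k] for v in projected) / n for k in range(len(grads[0]))]
```

With two conflicting directions such as [1, 0] and [-1, 1], the projection removes the opposing components before averaging, so the fused update no longer points against either agent's direction.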
Yichen Han
South China Normal University
Bojun Liu
University of Science and Technology of China
Zhengpeng Zhou
Shanghai Jiaotong University
Guanyu Liu
University of Macau
Zeng Zhang
South China Normal University
Yang Yang
Silicon Sapiens LLC
Wenli Wang
Silicon Sapiens LLC
Isaac N Shi
Silicon Sapiens LLC
Yunyan
Silicon Sapiens LLC
Lewei He
South China Normal University
Tianyu Shi
University of Toronto