🤖 AI Summary
Existing prompt-engineering methods rely on a single optimization trajectory, which narrows their perspective, invites severe gradient conflicts, and incurs high computational overhead. To address these issues, this paper proposes MAPGD, a Multi-Agent Prompt Gradient Descent framework. MAPGD employs a task-decomposition agent for specialized role assignment, designs a semantic gradient fusion mechanism to harmonize heterogeneous gradient directions, and integrates a bandit-based candidate selection strategy with gradient-descent-inspired heuristic search, providing theoretical convergence guarantees while improving optimization efficiency and robustness. The framework enables interpretable prompt evolution and delivers significant gains over single-agent and random baselines across classification, generation, and reasoning tasks: average accuracy improves by 7.2% and the iteration count drops by 38%, with no loss of computational efficiency.
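The optimization loop described above can be sketched roughly as follows. This is an illustrative stand-in, not the paper's implementation: the agent roles, function names, and string-based "semantic gradients" are hypothetical, and the LLM calls that would produce real critiques and rewrites are replaced by placeholder string operations.

```python
# Hypothetical sketch of one MAPGD step: specialized agents each propose a
# textual "semantic gradient" (an edit direction), the gradients are fused,
# and the prompt is rewritten along the fused direction. In the real system
# each of these steps would be an LLM call.

AGENT_ROLES = ["clarity", "examples", "format", "style"]  # assumed roles

def propose_gradient(role: str, prompt: str, feedback: str) -> str:
    # Each agent critiques the prompt from its own specialized perspective.
    return f"[{role}] revise '{prompt}' given: {feedback}"

def fuse_gradients(gradients: list[str]) -> str:
    # Semantic gradient fusion: reconcile heterogeneous edit directions into
    # one coherent update (simple concatenation as a placeholder here).
    return " | ".join(gradients)

def apply_gradient(prompt: str, fused: str) -> str:
    # The "descent step": rewrite the prompt along the fused direction.
    return f"{prompt} <edited per: {fused}>"

def mapgd_step(prompt: str, feedback: str) -> str:
    grads = [propose_gradient(r, prompt, feedback) for r in AGENT_ROLES]
    return apply_gradient(prompt, fuse_gradients(grads))
```

In practice each step would also generate several candidate rewrites and score them on a validation set, which is where the bandit-based selection comes in.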
📝 Abstract
Prompt engineering is crucial for leveraging large language models (LLMs), but existing methods often rely on a single optimization trajectory, limiting adaptability and efficiency while suffering from narrow perspectives, gradient conflicts, and high computational cost. We propose MAPGD (Multi-Agent Prompt Gradient Descent), a framework that integrates multi-agent collaboration with gradient-based optimization. MAPGD features specialized agents for task clarity, example selection, format design, and stylistic refinement; semantic gradient coordination to resolve conflicts; bandit-based candidate selection for an efficient exploration-exploitation trade-off; and theoretical convergence guarantees. Experiments on classification, generation, and reasoning tasks show that MAPGD outperforms single-agent and random baselines in both accuracy and efficiency. Ablations confirm the benefits of gradient fusion, agent specialization, and conflict resolution, establishing MAPGD as a unified, gradient-inspired multi-agent approach to robust and interpretable prompt optimization.
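The bandit-based candidate selection mentioned in the abstract could, under one common formulation, use a UCB1-style rule to decide which candidate prompt to evaluate next. The sketch below is an assumption about the general technique, not the paper's exact selection rule; the function name and the exploration constant `c` are illustrative.

```python
import math

def ucb1_select(counts: list[int], rewards: list[float], c: float = 1.4) -> int:
    """Return the index of the candidate prompt to evaluate next.

    counts[i]  - how many times candidate i has been evaluated
    rewards[i] - sum of validation scores candidate i has received

    Untried candidates are evaluated first; otherwise the candidate with the
    highest mean score plus an exploration bonus wins. (Generic UCB1 helper,
    shown as a plausible stand-in for MAPGD's selection strategy.)
    """
    for i, n in enumerate(counts):
        if n == 0:          # force one evaluation of every candidate
            return i
    total = sum(counts)
    return max(
        range(len(counts)),
        key=lambda i: rewards[i] / counts[i]
        + c * math.sqrt(math.log(total) / counts[i]),
    )
```

The exploration bonus shrinks as a candidate accumulates evaluations, so the search spends most of its budget on promising prompts while still occasionally revisiting under-sampled ones.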