MASPOB: Bandit-Based Prompt Optimization for Multi-Agent Systems with Graph Neural Networks

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses three key challenges in prompt optimization for multi-agent systems: high evaluation costs, topological coupling among prompts, and combinatorial explosion. To tackle these issues, the authors propose MASPOB, a novel framework that leverages graph neural networks (GNNs) to model the topological dependencies between prompts and employs coordinate ascent to reduce the search space from exponential to linear scale. Additionally, MASPOB integrates an Upper Confidence Bound (UCB)-based bandit algorithm to balance exploration and exploitation under limited evaluation budgets. Experimental results demonstrate that MASPOB significantly outperforms existing prompt optimization methods across multiple benchmarks, achieving state-of-the-art performance.
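The summary above describes a UCB-based bandit that balances exploration and exploitation under a limited evaluation budget. The paper's actual algorithm is not reproduced on this page; the following is a minimal illustrative sketch of the standard UCB1 rule it builds on, where the "arms" stand in for candidate prompts and the Bernoulli reward model is an invented stand-in for a real benchmark evaluation:

```python
import math
import random

def ucb_select(counts, values, c=1.4):
    """Return the arm with the highest upper confidence bound.

    counts[a]: how many times arm a was evaluated.
    values[a]: cumulative reward observed for arm a.
    c: exploration weight (hypothetical default, not from the paper).
    """
    total = sum(counts)
    best_arm, best_score = None, -float("inf")
    for arm in range(len(counts)):
        if counts[arm] == 0:
            return arm  # evaluate every arm at least once first
        mean = values[arm] / counts[arm]
        bonus = c * math.sqrt(math.log(total) / counts[arm])
        if mean + bonus > best_score:
            best_arm, best_score = arm, mean + bonus
    return best_arm

# Toy run: three candidate prompts with hidden success rates,
# a strict budget of 200 evaluations.
random.seed(0)
true_rates = [0.2, 0.5, 0.8]
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]
for _ in range(200):
    arm = ucb_select(counts, values)
    reward = 1.0 if random.random() < true_rates[arm] else 0.0
    counts[arm] += 1
    values[arm] += reward

# The arm pulled most often is the bandit's current best guess.
best_arm = max(range(len(counts)), key=lambda a: counts[a])
```

Under this rule, low-count arms receive a large uncertainty bonus, so evaluations concentrate on promising prompts without abandoning under-explored ones.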

📝 Abstract
Large Language Models (LLMs) have achieved great success in many real-world applications, especially when serving as the cognitive backbone of Multi-Agent Systems (MAS) that orchestrate complex workflows in practice. Since many deployment scenarios preclude modifications to the MAS workflow, and since MAS performance is highly sensitive to the input prompts, prompt optimization emerges as a natural approach to improving performance. However, real-world prompt optimization for MAS is impeded by three key challenges: (1) the need for sample efficiency due to prohibitive evaluation costs, (2) topology-induced coupling among prompts, and (3) the combinatorial explosion of the search space. To address these challenges, we introduce MASPOB (Multi-Agent System Prompt Optimization via Bandits), a novel sample-efficient framework based on bandits. By leveraging the Upper Confidence Bound (UCB) to quantify uncertainty, the bandit framework balances exploration and exploitation, maximizing gains within a strictly limited budget. To handle topology-induced coupling, MASPOB integrates Graph Neural Networks (GNNs) to capture structural priors, learning topology-aware representations of prompt semantics. Furthermore, it employs coordinate ascent to decompose the optimization into univariate sub-problems, reducing search complexity from exponential to linear. Extensive experiments across diverse benchmarks demonstrate that MASPOB achieves state-of-the-art performance, consistently outperforming existing baselines.
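The abstract's coordinate-ascent step, optimizing one agent's prompt while holding the others fixed, can be sketched as follows. Instead of scoring all K^N joint prompt assignments for N agents with K candidates each, every sweep scores only N×K assignments. The toy objective and candidate prompts below are invented for illustration and are not from the paper:

```python
def coordinate_ascent(candidates, score, sweeps=2):
    """Greedy coordinate ascent over per-agent prompt choices.

    candidates: list of lists; candidates[i] are prompt options for agent i.
    score: callable that evaluates a full assignment (one prompt per agent).
    Each sweep varies one agent at a time, keeping the rest fixed,
    so cost per sweep is sum(len(c)) evaluations rather than prod(len(c)).
    """
    current = [opts[0] for opts in candidates]
    for _ in range(sweeps):
        for i, opts in enumerate(candidates):
            # Pick the best option for agent i with all other agents frozen.
            current[i] = max(
                opts,
                key=lambda p: score(current[:i] + [p] + current[i + 1:]),
            )
    return current

# Toy objective: counts how many agents landed on a "target" prompt.
target = ["plan carefully", "cite sources", "verify answer"]
cands = [
    ["be brief", "plan carefully"],
    ["guess", "cite sources"],
    ["skip checks", "verify answer"],
]

def toy_score(assignment):
    return sum(a == t for a, t in zip(assignment, target))

best = coordinate_ascent(cands, toy_score)
```

In this separable toy case a single sweep already reaches the optimum; in the coupled setting the abstract describes, multiple sweeps and a topology-aware surrogate (the GNN component) would be needed, which this sketch does not model.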
Problem

Research questions and friction points this paper is trying to address.

prompt optimization
multi-agent systems
sample efficiency
topology-induced coupling
combinatorial explosion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bandit-based optimization
Graph Neural Networks
Prompt optimization
Multi-Agent Systems
Coordinate ascent