BOAD: Discovering Hierarchical Software Engineering Agents via Bandit Optimization

📅 2025-12-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) exhibit limited generalization in long-horizon, out-of-distribution software engineering tasks. Method: This paper proposes an automated method for discovering hierarchical multi-agent architectures, framing hierarchical structure search as a multi-armed bandit (MAB) problem to jointly optimize credit assignment and sub-agent design—without manual role specification—and enabling online hierarchical evolution under constrained evaluation budgets. The system employs a coordinator that orchestrates specialized LLM-driven sub-agents (e.g., for localization, editing, and verification) to collaboratively perform complex code repair. Results: On SWE-bench-Verified, our approach significantly outperforms both single-agent and hand-crafted multi-agent baselines. On the more challenging SWE-bench-Live benchmark, a 36B-parameter model ranks second overall, surpassing larger proprietary models including GPT-4 and Claude.

📝 Abstract
Large language models (LLMs) have shown strong reasoning and coding capabilities, yet they struggle to generalize to real-world software engineering (SWE) problems that are long-horizon and out of distribution. Existing systems often rely on a single agent to handle the entire workflow (interpreting issues, navigating large codebases, and implementing fixes) within one reasoning chain. Such monolithic designs force the model to retain irrelevant context, leading to spurious correlations and poor generalization. Motivated by how human engineers decompose complex problems, we propose structuring SWE agents as orchestrators coordinating specialized sub-agents for sub-tasks such as localization, editing, and validation. The challenge lies in discovering effective hierarchies automatically: as the number of sub-agents grows, the search space becomes combinatorial, and it is difficult to attribute credit to individual sub-agents within a team. We address these challenges by formulating hierarchy discovery as a multi-armed bandit (MAB) problem, where each arm represents a candidate sub-agent and the reward measures its helpfulness when collaborating with others. This framework, termed Bandit Optimization for Agent Design (BOAD), enables efficient exploration of sub-agent designs under limited evaluation budgets. On SWE-bench-Verified, BOAD outperforms single-agent and manually designed multi-agent systems. On SWE-bench-Live, featuring more recent and out-of-distribution issues, our 36B system ranks second on the leaderboard at the time of evaluation, surpassing larger models such as GPT-4 and Claude. These results demonstrate that automatically discovered hierarchical multi-agent systems significantly improve generalization on challenging long-horizon SWE tasks. Code is available at https://github.com/iamxjy/BOAD-SWE-Agent.
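The bandit formulation in the abstract (each arm a candidate sub-agent design, reward a measure of its helpfulness, all under a limited evaluation budget) can be sketched with a generic UCB1 loop. This is an illustrative sketch only: the paper's actual selection rule, reward definition, and credit-assignment scheme may differ.

```python
import math

def ucb1_select(counts, values, t, c=math.sqrt(2)):
    """Pick the arm (candidate sub-agent design) with the highest UCB score.

    counts[i]: times design i has been evaluated
    values[i]: running mean reward of design i (e.g., resolved-issue rate)
    t: total evaluations so far
    """
    for i, n in enumerate(counts):
        if n == 0:  # evaluate every candidate design at least once
            return i
    scores = [values[i] + c * math.sqrt(math.log(t) / counts[i])
              for i in range(len(counts))]
    return max(range(len(scores)), key=scores.__getitem__)

def run_bandit(evaluate, num_arms, budget):
    """Spend a limited evaluation budget across candidate designs."""
    counts = [0] * num_arms
    values = [0.0] * num_arms
    for t in range(1, budget + 1):
        arm = ucb1_select(counts, values, t)
        reward = evaluate(arm)  # e.g., pass rate on a batch of SWE tasks
        counts[arm] += 1
        # incremental update of the running mean reward
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts, values
```

With deterministic per-arm rewards, the loop concentrates its budget on the best-performing design while still periodically re-checking the others, which is the exploration/exploitation trade-off the paper leverages.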
Problem

Research questions and friction points this paper is trying to address.

How to automatically discover effective hierarchical multi-agent systems for software engineering
Poor generalization of single-agent LLMs on long-horizon, out-of-distribution software tasks
How to assign credit to individual sub-agents and search a combinatorial hierarchy space under limited evaluation budgets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical multi-agent system for software engineering tasks
Bandit optimization to discover effective sub-agent hierarchies
Specialized sub-agents for localization, editing, and validation
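The coordinator-plus-sub-agents decomposition above can be illustrated with a toy orchestration loop. The interfaces here are hypothetical: the real system uses LLM-driven sub-agents and richer context management, not fixed Python callables.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SubAgent:
    """A specialized worker the coordinator can delegate to."""
    name: str
    run: Callable[[dict], dict]  # takes shared context, returns updates

def coordinate(issue: str, sub_agents: list) -> dict:
    """Run sub-agents in sequence, accumulating their outputs in a shared context."""
    context = {"issue": issue}
    for agent in sub_agents:
        context.update(agent.run(context))
    return context

# Toy sub-agents standing in for LLM-driven localization, editing, and verification.
localizer = SubAgent("localizer", lambda c: {"file": "utils.py"})
editor    = SubAgent("editor",    lambda c: {"patch": f"fix in {c['file']}"})
verifier  = SubAgent("verifier",  lambda c: {"ok": "fix" in c["patch"]})

result = coordinate("off-by-one bug", [localizer, editor, verifier])
```

Each sub-agent sees only the context produced so far, which mirrors the paper's motivation: narrowing what each agent must attend to, instead of forcing one monolithic reasoning chain to retain everything.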