SWE-PolyBench: A multi-language benchmark for repository level evaluation of coding agents

📅 2025-04-11
🤖 AI Summary
Existing benchmarks for coding agents lack unified, execution-based evaluation across multiple programming languages on real-world software repositories. Method: We introduce SWE-PolyBench, a multi-language, repository-level benchmark of 2,110 tasks drawn from 21 real open-source repositories, spanning Java, JavaScript, TypeScript, and Python and covering bug fixes, feature additions, and code refactoring. We also provide SWE-PolyBench500, a task- and repository-stratified subsample, together with a fully automated evaluation harness and novel fine-grained metrics rooted in syntax tree (AST) analysis. Contribution/Results: Systematic evaluation of leading open-source coding agents on SWE-PolyBench reveals pronounced cross-language performance disparities and sharp accuracy degradation on complex tasks such as code refactoring relative to simpler ones. SWE-PolyBench establishes a reproducible, scalable evaluation standard for AI-powered programming assistants.

📝 Abstract
Coding agents powered by large language models have shown impressive capabilities in software engineering tasks, but evaluating their performance across diverse programming languages and real-world scenarios remains challenging. We introduce SWE-PolyBench, a new multi-language benchmark for repository-level, execution-based evaluation of coding agents. SWE-PolyBench contains 2110 instances from 21 repositories and includes tasks in Java (165), JavaScript (1017), TypeScript (729) and Python (199), covering bug fixes, feature additions, and code refactoring. We provide a task and repository-stratified subsample (SWE-PolyBench500) and release an evaluation harness allowing for fully automated evaluation. To enable a more comprehensive comparison of coding agents, this work also presents a novel set of metrics rooted in syntax tree analysis. We evaluate leading open source coding agents on SWE-PolyBench, revealing their strengths and limitations across languages, task types, and complexity classes. Our experiments show that current agents exhibit uneven performances across languages and struggle with complex problems while showing higher performance on simpler tasks. SWE-PolyBench aims to drive progress in developing more versatile and robust AI coding assistants for real-world software engineering. Our datasets and code are available at: https://github.com/amazon-science/SWE-PolyBench
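The task- and repository-stratified subsample (SWE-PolyBench500) mentioned in the abstract can be sketched as follows. This is a minimal illustration of proportional stratified sampling; the task records, field names, and allocation rule are assumptions for the example, not the paper's exact procedure.

```python
import random
from collections import defaultdict

# Hypothetical task records: each SWE-PolyBench instance carries a language
# and a task type (bug fix, feature addition, or refactoring).
tasks = [
    {"id": f"t{i}", "language": lang, "task_type": tt}
    for i, (lang, tt) in enumerate(
        (lang, tt)
        for lang in ("Java", "JavaScript", "TypeScript", "Python")
        for tt in ("bugfix", "feature", "refactor")
        for _ in range(50)  # 50 tasks per (language, task_type) stratum
    )
]

def stratified_sample(tasks, n_total, key=lambda t: (t["language"], t["task_type"]), seed=0):
    """Draw ~n_total tasks, allocating slots to each stratum in
    proportion to its share of the full dataset (at least 1 each)."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for t in tasks:
        strata[key(t)].append(t)
    sample = []
    for group in strata.values():
        k = max(1, round(n_total * len(group) / len(tasks)))
        sample.extend(rng.sample(group, min(k, len(group))))
    return sample

subset = stratified_sample(tasks, 500)
```

Stratifying jointly on language and task type keeps rare combinations (e.g. Java refactoring tasks) represented in the subset instead of being crowded out by the dominant JavaScript/TypeScript instances.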
Problem

Research questions and friction points this paper is trying to address.

Evaluating coding agents across diverse languages and real-world scenarios
Introducing a multi-language benchmark for repository-level coding agent evaluation
Assessing agent performance on bug fixes, feature additions, and refactoring
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-language benchmark for coding agents
Execution-based evaluation with diverse tasks
Novel metrics using syntax tree analysis
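A syntax-tree-based metric along these lines can be sketched with Python's `ast` module: identify which functions and classes a patch actually changes, then score an agent's patch against the gold patch by node-level precision and recall. The node granularity and scoring here are illustrative assumptions, not the paper's exact metric definitions.

```python
import ast

def defined_nodes(source: str) -> dict[str, str]:
    """Map each function/class name in a Python file to its source text."""
    tree = ast.parse(source)
    return {
        n.name: ast.get_source_segment(source, n)
        for n in ast.walk(tree)
        if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
    }

def changed_nodes(before: str, after: str) -> set[str]:
    """Names of functions/classes added, removed, or modified by a patch."""
    b, a = defined_nodes(before), defined_nodes(after)
    return {name for name in b.keys() | a.keys() if b.get(name) != a.get(name)}

def retrieval_scores(gold: set[str], predicted: set[str]) -> tuple[float, float]:
    """Precision/recall of the agent-modified nodes vs. the gold patch."""
    tp = len(gold & predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# Toy example: the gold patch edits f; the agent's patch edits f and g.
before = "def f():\n    return 1\n\ndef g():\n    return 2\n"
gold_after = "def f():\n    return 10\n\ndef g():\n    return 2\n"
pred_after = "def f():\n    return 10\n\ndef g():\n    return 20\n"

gold = changed_nodes(before, gold_after)   # {"f"}
pred = changed_nodes(before, pred_after)   # {"f", "g"}
precision, recall = retrieval_scores(gold, pred)  # (0.5, 1.0)
```

Unlike line-level diff overlap, comparing at the AST-node level credits an agent for localizing the right function even when its exact edit differs from the gold patch.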