AI Summary
This work addresses the steep learning curve of Rust and the inability of existing large language model agents to automatically repair repository-scale issues. We introduce Rust-SWE-bench, the first benchmark comprising 500 real-world Rust tasks, and systematically evaluate mainstream agents, revealing their deficiencies in understanding repository-wide code structure and adhering to Rust's type semantics. To overcome these limitations, we propose RUSTFORGER, a novel approach that integrates ReAct-style agents, large language models (e.g., Claude Sonnet 3.7), Rust metaprogramming, dynamic tracing, and an automated testing environment. On Rust-SWE-bench, RUSTFORGER achieves a repair rate of 28.6%, outperforming the strongest baseline by 34.9% and uniquely solving 46 tasks that no other method resolves.
Abstract
The Rust programming language presents a steep learning curve and significant coding challenges, making the automation of issue resolution essential for its broader adoption. Recently, LLM-powered code agents have shown remarkable success in resolving complex software engineering tasks, yet their application to Rust has been limited by the absence of a large-scale, repository-level benchmark. To bridge this gap, we introduce Rust-SWE-bench, a benchmark comprising 500 real-world, repository-level software engineering tasks from 34 diverse and popular Rust repositories. We then perform a comprehensive study on Rust-SWE-bench with four representative agents and four state-of-the-art LLMs to establish a foundational understanding of their capabilities and limitations in the Rust ecosystem. Our extensive study reveals that while ReAct-style agents are promising, resolving up to 21.2% of issues, they are limited by two primary challenges: comprehending repository-wide code structure and complying with Rust's strict type and trait semantics. We also find that issue reproduction is critical for task resolution. Inspired by these findings, we propose RUSTFORGER, a novel agentic approach that integrates an automated test environment setup with a Rust metaprogramming-driven dynamic tracing strategy to facilitate reliable issue reproduction and dynamic analysis. The evaluation shows that RUSTFORGER using Claude-Sonnet-3.7 significantly outperforms all baselines, resolving 28.6% of tasks on Rust-SWE-bench, a 34.9% improvement over the strongest baseline, and, across all adopted LLMs, uniquely solves 46 tasks that no other agent could resolve.
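The abstract does not detail how RUSTFORGER's metaprogramming-driven tracing is implemented. As a purely illustrative sketch of the general idea, not the paper's actual instrumentation, Rust's declarative macros can inject trace points around expressions at compile time, logging intermediate values during issue reproduction without changing program behavior (the `trace!` macro and `parse_port` function below are hypothetical examples):

```rust
// Hypothetical sketch: a macro that wraps any expression, printing its
// source text and runtime value before yielding the value unchanged.
macro_rules! trace {
    ($e:expr) => {{
        let value = $e;
        eprintln!("[trace] {} = {:?}", stringify!($e), value);
        value
    }};
}

// Example function under investigation: each traced sub-expression logs
// its intermediate value, which helps localize where a repro diverges.
fn parse_port(input: &str) -> Result<u16, std::num::ParseIntError> {
    let trimmed = trace!(input.trim());
    trace!(trimmed.parse::<u16>())
}

fn main() {
    assert_eq!(parse_port(" 8080 "), Ok(8080));
    assert!(parse_port("not-a-port").is_err());
    println!("traced both the passing and failing inputs");
}
```

Because the macro expands in place and returns the original value, the instrumented code is semantically identical to the uninstrumented code, which is what makes this style of tracing safe to apply automatically across a repository.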