🤖 AI Summary
SWE-Bench’s manually written test suites suffer from insufficient coverage, producing widespread false positives: erroneous patches incorrectly labeled as “passing.”
Method: We propose UTBoost, a test case augmentation framework built on UTGenerator, an LLM-driven, context-aware unit test generator that combines static dependency analysis, multi-stage test synthesis, and execution-based validation for real-world Python projects. Applying UTBoost to SWE-Bench systematically surfaces 345 erroneous patches that the original test suites mislabeled as passing.
Results: UTBoost corrects evaluation flaws in 36 task instances, substantially improving benchmark reliability: the corrections affect 40.9% of SWE-Bench Lite and 24.4% of SWE-Bench Verified leaderboard entries, yielding 18 and 11 ranking changes, respectively. This work is the first to systematically expose and mitigate test coverage bias in code generation evaluation.
📝 Abstract
The advent of Large Language Models (LLMs) has spurred the development of coding agents for real-world code generation. As a widely used benchmark for evaluating the code generation capabilities of these agents, SWE-Bench uses real-world problems based on GitHub issues and their corresponding pull requests. However, the manually written test cases included in these pull requests are often insufficient, allowing generated patches to pass the tests without resolving the underlying issue. To address this challenge, we introduce UTGenerator, an LLM-driven test case generator that automatically analyzes codebases and dependencies to generate test cases for real-world Python projects. Building on UTGenerator, we propose UTBoost, a comprehensive framework for test case augmentation. In our evaluation, we identified 36 task instances with insufficient test cases and uncovered 345 erroneous patches incorrectly labeled as passed in the original SWE-Bench. These corrections, impacting 40.9% of SWE-Bench Lite and 24.4% of SWE-Bench Verified leaderboard entries, yield 18 and 11 ranking changes, respectively.
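To make the three-stage pipeline concrete, here is a minimal, hypothetical sketch of the idea behind UTGenerator: a static-analysis pass collects the target function's dependencies, a synthesis step (a stub here, standing in for the LLM) writes candidate tests, and an execution step validates them. All names (`dependency_context`, `synthesize_test`, `validates`) and the toy `clamp` bug are illustrative assumptions, not the paper's implementation; the key validation criterion shown, that a useful augmented test must fail on the buggy code and pass on the gold patch, is how insufficient test coverage gets exposed.

```python
import ast

# Toy task instance: a buggy implementation and its gold-patched fix.
BUGGY_SRC = "def clamp(x, lo, hi):\n    return max(lo, min(hi, lo))\n"  # bug: `lo` instead of `x`
FIXED_SRC = "def clamp(x, lo, hi):\n    return max(lo, min(hi, x))\n"

def dependency_context(source: str) -> list[str]:
    """Stage 1 (static dependency analysis): collect the names the code calls,
    so the generator can be prompted with the right context."""
    tree = ast.parse(source)
    calls = {node.func.id for node in ast.walk(tree)
             if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)}
    return sorted(calls)

def synthesize_test(context: list[str]) -> str:
    """Stage 2 (test synthesis): an LLM would generate this from the issue
    and the dependency context; stubbed here with fixed assertions."""
    return "assert clamp(5, 0, 10) == 5\nassert clamp(-1, 0, 10) == 0\n"

def validates(test_src: str, impl_src: str) -> bool:
    """Stage 3 (execution-based validation): run the candidate test
    against an implementation and report pass/fail."""
    namespace = {}
    exec(impl_src, namespace)
    try:
        exec(test_src, namespace)
        return True
    except AssertionError:
        return False

test = synthesize_test(dependency_context(FIXED_SRC))
# An augmented test is kept only if it discriminates: it must fail on the
# original buggy code and pass on the gold patch.
assert not validates(test, BUGGY_SRC)
assert validates(test, FIXED_SRC)
```

A patch that passes the original suite but fails such a discriminating test is exactly the kind of false positive the paper reports 345 of.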