🤖 AI Summary
Automated test generation from real-world issue reports remains underexplored, lacking realistic, end-to-end evaluation benchmarks. Method: We introduce SWT-Bench, a benchmark built from popular GitHub repositories, containing real-world issue reports, the corresponding ground-truth fix patches, and golden tests, enabling holistic assessment of code intelligence agents. We evaluate LLM-based Code Agents that automatically formalize user-reported issues into executable tests, and propose a fine-grained dual-metric evaluation framework based on *issue reproduction rate* and *coverage change*. Further, we propose a test-driven filtering mechanism that uses generated tests to vet candidate fixes. Contribution/Results: Empirical evaluation shows that Code Agents designed for code repair substantially outperform systems designed specifically for test generation; our filtering mechanism doubles the precision of SWE-Agent; and the generated tests are highly relevant to the original issues and reliably reproduce the reported defects. SWT-Bench is publicly released to support reproducible, standardized evaluation of code intelligence agents.
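The dual metric above can be illustrated with a minimal sketch. This is a hypothetical toy model, not the benchmark's actual implementation: the names `TestRun`, `reproduces_issue`, and `coverage_change` are illustrative. A generated test suite scores well when it fails on the buggy code but passes once the ground-truth fix is applied (issue reproduction), and when it increases the number of patch lines it exercises (coverage change).

```python
# Hypothetical sketch of the dual-metric idea (illustrative names only):
# (1) issue reproduction: the generated tests must fail on the buggy
#     code and pass after the ground-truth fix is applied;
# (2) coverage change: how many additional lines of the fix patch the
#     generated tests exercise.
from dataclasses import dataclass


@dataclass
class TestRun:
    passed: bool           # did the generated test suite pass?
    covered_lines: set     # lines of the gold patch that were executed


def reproduces_issue(before_fix: TestRun, after_fix: TestRun) -> bool:
    """Fail-to-pass behavior: expose the bug, then accept the fix."""
    return (not before_fix.passed) and after_fix.passed


def coverage_change(before: TestRun, after: TestRun) -> int:
    """Net change in patch lines exercised by the test suite."""
    return len(after.covered_lines) - len(before.covered_lines)


# Toy example: the test fails on the buggy code, passes once fixed,
# and newly covers two lines of the patch.
buggy = TestRun(passed=False, covered_lines={10})
fixed = TestRun(passed=True, covered_lines={10, 11, 12})
print(reproduces_issue(buggy, fixed))  # True
print(coverage_change(buggy, fixed))   # 2
```

In the real benchmark these signals are computed by actually executing the generated tests against the repository before and after the gold patch; the sketch only captures the scoring logic.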
📝 Abstract
Rigorous software testing is crucial for developing and maintaining high-quality code, making automated test generation a promising avenue for both improving software quality and boosting the effectiveness of code generation methods. However, while code generation with Large Language Models (LLMs) is an extraordinarily active research area, test generation remains relatively unexplored. We address this gap and investigate the capability of LLM-based Code Agents to formalize user issues into test cases. To this end, we propose a novel benchmark based on popular GitHub repositories, containing real-world issues, ground-truth bug fixes, and golden tests. We find that LLMs generally perform surprisingly well at generating relevant test cases, with Code Agents designed for code repair exceeding the performance of systems designed specifically for test generation. Further, as test generation is a task similar to but more structured than code generation, it allows for a more fine-grained analysis using issue reproduction rate and coverage changes, providing a dual metric for analyzing systems designed for code repair. Finally, we find that generated tests are an effective filter for proposed code fixes, doubling the precision of SWE-Agent. We release all data and code at https://github.com/logic-star-ai/SWT-Bench.
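The fix-filtering result can be sketched as follows. This is a minimal, hypothetical illustration of the idea described in the abstract, not the paper's actual pipeline; `filter_fixes` and the patch names are invented for the example. The premise is that an issue-reproducing test acts as an executable specification: among candidate patches proposed by a repair agent, only those that make the generated tests pass are kept, which discards spurious fixes and raises precision.

```python
# Hypothetical sketch of test-driven fix filtering (illustrative API):
# keep only the candidate patches under which the generated,
# issue-reproducing test suite passes.
from typing import Callable, List


def filter_fixes(
    candidate_patches: List[str],
    tests_pass: Callable[[str], bool],
) -> List[str]:
    """Return the patches whose application makes the generated tests pass.

    tests_pass(patch) -> True iff the generated test suite passes with
    `patch` applied to the repository.
    """
    return [patch for patch in candidate_patches if tests_pass(patch)]


# Toy example: three candidate fixes, only "B" satisfies the tests.
candidates = ["A", "B", "C"]
results = {"A": False, "B": True, "C": False}
kept = filter_fixes(candidates, lambda p: results[p])
print(kept)  # ['B']
```

In practice `tests_pass` would apply the patch in an isolated environment and run the generated test suite; the sketch only shows the selection step.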