Does SWE-Bench-Verified Test Agent Ability or Model Memory?

📅 2025-12-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
SWE-Bench-Verified exhibits severe data-contamination risk, threatening its validity as a benchmark for real-world software engineering reasoning: high scores may reflect memorization of training data rather than genuine problem-solving ability. Method: the authors introduce the first zero-context file localization experiment to systematically expose data leakage, combining controlled prompting experiments with Claude models, cross-benchmark evaluation against BeetleBox and SWE-rebench, and a minimal-context file identification task to quantify the extent of contamination. Contribution/Results: models achieve 3× higher file localization accuracy and a 6× higher edited-file identification rate on SWE-Bench-Verified than on the baseline benchmarks, indicating substantial contamination and compromised evaluation validity. This work provides the first empirical evidence of SWE-Bench-Verified's unreliability and proposes a reproducible methodological framework for decontaminated benchmark design in software engineering agent evaluation.

📝 Abstract
SWE-Bench-Verified, a dataset comprising 500 issues, serves as a de facto benchmark for evaluating large language models (LLMs) on their ability to resolve GitHub issues. But this benchmark may overlap with model training data. If so, scores may reflect training recall rather than issue-solving skill. To study this, we test two Claude models that frequently appear in top-performing agents submitted to the benchmark. We ask them to find relevant files using only the issue text, and then the issue text plus file paths. We then run the same setup on BeetleBox and SWE-rebench. Despite both benchmarks also drawing on popular open-source Python projects, models performed 3 times better on SWE-Bench-Verified. They were also 6 times better at identifying the edited files, without any additional context about the projects themselves. This gap suggests the models may have seen many SWE-Bench-Verified tasks during training. As a result, scores on this benchmark may not reflect an agent's ability to handle real software issues, yet it continues to be used in ways that can misrepresent progress and favour agents built on certain models over strong agent design. Our setup strips the localization step down to so little context that the task should be logically impossible to solve without prior exposure. Our results show the risk of relying on older popular benchmarks and support the shift toward newer datasets built with contamination in mind.
Problem

Research questions and friction points this paper is trying to address.

Evaluates whether SWE-Bench-Verified measures genuine problem-solving or training data recall
Tests if benchmark scores reflect real software issue resolution versus model memory
Assesses contamination risk in benchmarks and advocates for newer contamination-aware datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tested models using only issue text
Compared performance across different benchmarks
Highlighted training data contamination risks
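The cross-benchmark comparison above reduces to a simple localization metric: for each task, does the model name at least one file that the gold patch actually edits? A minimal scoring sketch, with a hypothetical function name and assuming per-task lists of gold edited files are available:

```python
def localization_accuracy(predictions: list[list[str]],
                          gold_edits: list[list[str]]) -> float:
    """Fraction of tasks where at least one model-predicted file
    overlaps with the files edited in the gold patch."""
    if not gold_edits:
        return 0.0
    hits = sum(
        1 for predicted, gold in zip(predictions, gold_edits)
        if set(predicted) & set(gold)  # any overlap counts as a hit
    )
    return hits / len(gold_edits)
```

Running this scorer on SWE-Bench-Verified versus BeetleBox and SWE-rebench under identical prompts is what surfaces the reported 3× gap: a contamination-free model should score similarly badly on all three.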