TENET: Leveraging Tests Beyond Validation for Code Generation

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address three core challenges in test-driven development (TDD) for LLM code generation, namely test suite minimization, context-aware retrieval, and test-feedback-guided iterative refinement, this paper introduces TENET, the first repository-scale test-driven code generation agent framework. The method features a lightweight test selection mechanism, a tailored tool-augmented context retrieval module, and a multi-round refinement pipeline grounded in failure analysis and reflective reasoning. Crucially, it treats test cases as executable specifications, enabling closed-loop generation and optimization. Evaluated on the RepoCod and RepoEval benchmarks, the framework achieves Pass@1 scores of 69.08% and 81.77%, outperforming the best prior agentic methods by 9.49 and 2.17 percentage points, respectively. The work also provides the first systematic empirical evidence of how real-world repository-level test contexts shape LLM code generation performance.
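
The summary describes a closed loop in which selected tests act as executable specifications that drive generation, failure analysis, and refinement. The sketch below illustrates one way such a loop could be wired together; the helper names (`select_tests`, `retrieve_context`, `generate_candidate`, `run_tests`, `reflect_on_failures`) and the fixed round budget are illustrative assumptions, not TENET's actual interfaces.

```python
# Hypothetical sketch of a test-driven generation loop; every helper
# passed in below is an assumed stand-in, not TENET's real API.
from typing import Callable

def tdd_generation_loop(
    task: str,
    candidate_tests: list[str],
    select_tests: Callable[[list[str]], list[str]],
    retrieve_context: Callable[[str, list[str]], str],
    generate_candidate: Callable[[str, str, str], str],
    run_tests: Callable[[str, list[str]], list[dict]],
    reflect_on_failures: Callable[[list[dict]], str],
    max_rounds: int = 3,
) -> str:
    """Generate code, execute the selected tests, and refine on failures."""
    tests = select_tests(candidate_tests)      # concise, diverse suite
    context = retrieve_context(task, tests)    # repository-level context
    feedback = ""                              # no failures yet
    code = ""
    for _ in range(max_rounds):
        code = generate_candidate(task, context, feedback)
        failures = [r for r in run_tests(code, tests) if not r["passed"]]
        if not failures:                       # all selected tests pass
            return code
        # Reflect on the failing tests and replenish context for the next round.
        feedback = reflect_on_failures(failures)
        context = retrieve_context(task + "\n" + feedback, tests)
    return code                                # best effort after the budget
```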

📝 Abstract
Test-Driven Development (TDD) is a widely adopted software engineering practice that requires developers to create and execute tests alongside code implementation, ensuring that software behavior is continuously validated and refined. In the era of vibe coding, where developers increasingly delegate code writing to large language models (LLMs) by specifying high-level intentions, TDD becomes even more crucial, as test cases serve as executable specifications that explicitly define and verify intended functionality beyond what natural-language descriptions and code context can convey. While vibe coding under TDD is promising, there are three main challenges: (1) selecting a small yet effective test suite to improve the generation accuracy and control the execution workload, (2) retrieving context such as relevant code effectively, and (3) systematically using test feedback for effective code refinement. To address these challenges, we introduce TENET, an LLM agent for generating functions in complex real-world repositories under the TDD setting. TENET features three components: (1) a novel test harness mechanism that selects a concise test suite to maximize diversity of target usage scenarios; (2) a tailored agent toolset that performs efficient retrieval of relevant code with interactive debugging; and (3) a reflection-based refinement workflow that iteratively analyzes failures, replenishes context, and applies code refinement. TENET achieves 69.08% and 81.77% Pass@1 on RepoCod and RepoEval benchmarks, outperforming the best agentic baselines by 9.49 and 2.17 percentage points, respectively. In addition, this is the first study of test-driven code generation with repository-level context, examining how different aspects of test suites affect the performance of LLM agents under the TDD setting.
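
The abstract reports Pass@1, the standard execution-based metric from the HumanEval line of work: the fraction of problems for which a generated solution passes all tests. For reference, a minimal implementation of the usual unbiased pass@k estimator (Chen et al., 2021) is shown below; the variable names are our own, and Pass@1 is the special case k = 1.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    n: total samples generated for a problem
    c: number of samples that pass all tests
    k: samples considered per problem (k = 1 for Pass@1)
    """
    if n - c < k:
        return 1.0
    # 1 - C(n - c, k) / C(n, k), computed stably as a running product
    return 1.0 - math.prod(1.0 - k / i for i in range(n - c + 1, n + 1))

# Example: 10 samples, 7 passing -> Pass@1 estimate of 0.7
print(pass_at_k(n=10, c=7, k=1))
```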
Problem

Research questions and friction points this paper is trying to address.

Selecting effective test suites to improve generation accuracy
Retrieving relevant code context efficiently for debugging
Systematically using test feedback for iterative code refinement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Test harness selects a concise, diverse test suite (see the sketch after this list)
Toolset retrieves relevant code with debugging
Reflection workflow iteratively refines code using tests
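
As noted in the test-harness item above, the goal is a small test suite that still covers diverse usage scenarios. One generic way to approximate this is greedy selection that maximizes marginal coverage of scenario features. The sketch below is a hedged illustration of that idea: `extract_features` is a hypothetical stand-in for whatever signal (called APIs, argument types, covered branches) distinguishes tests, and this is not TENET's actual selection algorithm.

```python
# Hedged sketch: greedy diversity-maximizing test selection.
from typing import Callable

def select_diverse_tests(
    tests: list[str],
    extract_features: Callable[[str], set[str]],
    budget: int,
) -> list[str]:
    """Pick up to `budget` tests, each adding the most not-yet-covered features."""
    selected: list[str] = []
    covered: set[str] = set()
    remaining = list(tests)
    while remaining and len(selected) < budget:
        # Choose the test contributing the most new features.
        best = max(remaining, key=lambda t: len(extract_features(t) - covered))
        gain = extract_features(best) - covered
        if not gain and selected:      # nothing new left to cover
            break
        selected.append(best)
        covered |= gain
        remaining.remove(best)
    return selected
```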
Yiran Hu
Computer Science Department, Purdue University, IN 47906, USA
Nan Jiang
Microsoft Office AI, work done independently of employer
Shanchao Liang
PhD student at Purdue University
Yi Wu
Computer Science Department, Purdue University, IN 47906, USA
Lin Tan
Mary J. Elmore New Frontiers Professor, Computer Science, Purdue University
LLM4Code · Software reliability · AI · Text analytics · Autoformalization