AI Summary
This work addresses the limited global perspective of open-source large language models (LLMs) in automated test generation, which hinders their ability to efficiently produce test cases with high marginal utility. The authors formulate test suite generation as a Markov decision process and, for the first time, integrate monotone submodular optimization theory with reinforcement learning to recast the problem as a tractable sequential greedy optimization task. They fine-tune an open-source LLM to serve as a neural greedy expert for sequential test case generation. This approach substantially overcomes the model's myopic limitations, achieving significant improvements on the ULT benchmark: branch coverage increases by 38.15–52.37%, execution pass rates rise by 298.22–558.88%, and defect detection improves by 58.43–95.45%. Notably, the method enables a 7B-parameter model to approach the performance of GPT-5.2.
Abstract
With the rapid evolution of LLMs, automated software testing is witnessing a paradigm shift. While proprietary models like GPT-4o demonstrate impressive capabilities, their high deployment costs and data privacy concerns make open-source LLMs a practical imperative in many academic and industrial scenarios. Automated test generation has evolved toward iterative, LLM-based workflows for constructing test suites. When utilizing open-source LLMs, we empirically observe that they lack a suite-level perspective and suffer from structural myopia: they fail to generate new tests with large marginal gain given the currently covered state. In this paper, from a sequential perspective, we formalize test suite generation as an MDP and demonstrate that its objective exhibits monotone submodularity, which enables an effective relaxation of this NP-hard global optimization into a tractable step-wise greedy procedure. Guided by this insight, we propose TestDecision, which transforms LLMs into neural greedy experts. TestDecision consists of two synergistic components: (1) an inference framework that constructs test suites following a step-wise greedy strategy; and (2) a reinforcement learning training pipeline that equips the base LLM with the sequential test generation ability to maximize marginal gain. Comprehensive evaluations on the ULT benchmark demonstrate that TestDecision significantly outperforms existing advanced methods. It improves branch coverage by 38.15–52.37% and execution pass rate by 298.22–558.88% over all base models, achieving performance on a 7B backbone comparable to the much larger proprietary LLM GPT-5.2. Furthermore, TestDecision finds 58.43–95.45% more bugs than the vanilla base LLMs and exhibits superior generalization on LiveCodeBench, proving its capability to construct high-quality test suites.
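The step-wise greedy relaxation the abstract describes can be illustrated with a minimal sketch. Because coverage is a monotone submodular set function, selecting at each step the candidate test with the largest marginal coverage gain is the classic greedy strategy that achieves a (1 − 1/e) approximation to the optimal fixed-size suite. The candidate tests and branch ids below are hypothetical stand-ins for LLM-generated tests; this is not the paper's implementation.

```python
def greedy_test_suite(candidates, budget):
    """Greedily build a test suite maximizing marginal branch coverage.

    candidates: dict mapping test name -> set of covered branch ids.
    budget: maximum number of tests to select.
    Returns the selected suite (in order) and the union of covered branches.
    """
    suite, covered = [], set()
    for _ in range(budget):
        # Marginal gain of each remaining candidate w.r.t. current coverage.
        best = max(
            (t for t in candidates if t not in suite),
            key=lambda t: len(candidates[t] - covered),
            default=None,
        )
        # Stop early once no candidate adds any new coverage.
        if best is None or not (candidates[best] - covered):
            break
        suite.append(best)
        covered |= candidates[best]
    return suite, covered

# Toy example with hypothetical tests and branch ids.
cands = {
    "test_a": {1, 2, 3},
    "test_b": {3, 4},
    "test_c": {5},
    "test_d": {1, 2},
}
suite, covered = greedy_test_suite(cands, budget=3)
# test_a is picked first (gain 3); test_d is never picked (gain 0).
```

In TestDecision's framing, the fine-tuned LLM plays the role of this greedy oracle: instead of ranking a fixed candidate pool, it is trained to directly generate the next test with maximal marginal gain given the current coverage state.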