🤖 AI Summary
To address the low quality of automatically generated unit tests, this paper proposes UTRL, an adversarial reinforcement learning framework that jointly trains two large language models (e.g., Qwen3-4B) in complementary roles: a unit test generator and a code generator. The test generator is trained to maximize a discrimination reward, which reflects its ability to produce tests that expose faults in the code generator's solutions, while the code generator is trained to maximize a code reward for producing solutions that pass the generated tests; the two models are refined iteratively against each other. Compared with supervised fine-tuning on human-written ground-truth tests, and with frontier models such as GPT-4.1, UTRL yields higher-quality tests whose code evaluations align more closely with those induced by the ground-truth tests. By introducing this adversarial co-training mechanism into LLM-driven unit test generation, the work jointly optimizes test generation and code generation, pointing toward a new paradigm for trustworthy verification of AI-generated code.
📝 Abstract
Unit testing is a core practice in programming, enabling systematic evaluation of programs produced by human developers or large language models (LLMs). Given the challenges in writing comprehensive unit tests, LLMs have been employed to automate test generation, yet methods for training LLMs to produce high-quality tests remain underexplored. In this work, we propose UTRL, a novel reinforcement learning framework that trains an LLM to generate high-quality unit tests given a programming instruction. Our key idea is to iteratively train two LLMs, the unit test generator and the code generator, in an adversarial manner via reinforcement learning. The unit test generator is trained to maximize a discrimination reward, which reflects its ability to produce tests that expose faults in the code generator's solutions, and the code generator is trained to maximize a code reward, which reflects its ability to produce solutions that pass the unit tests generated by the test generator. In our experiments, we demonstrate that unit tests generated by Qwen3-4B trained via UTRL show higher quality compared to unit tests generated by the same model trained via supervised fine-tuning on human-written ground-truth unit tests, yielding code evaluations that more closely align with those induced by the ground-truth tests. Moreover, Qwen3-4B trained with UTRL outperforms frontier models such as GPT-4.1 in generating high-quality unit tests, highlighting the effectiveness of UTRL in training LLMs for this task.
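The abstract's two rewards can be illustrated with a toy sketch. The paper's exact reward formulas are not given here, so the functions below (`discrimination_reward`, `code_reward`) and the toy candidates are illustrative assumptions: tests are callables that assert on a solution, the discrimination reward measures how many faulty solutions a test rejects while still accepting a correct reference, and the code reward is the fraction of generated tests a solution passes.

```python
def run_test(test, solution):
    """Return True if `solution` passes `test` (a callable that raises AssertionError on failure)."""
    try:
        test(solution)
        return True
    except AssertionError:
        return False

# Toy candidate solutions the "code generator" might produce for abs(x)
def correct(x):
    return x if x >= 0 else -x

def buggy(x):
    return x  # wrong for negative inputs

# Toy candidate tests the "test generator" might produce
def weak_test(f):
    assert f(3) == 3    # both solutions pass: no discrimination

def strong_test(f):
    assert f(-2) == 2   # exposes the bug in `buggy`

def discrimination_reward(test, reference, faulty_candidates):
    """Illustrative stand-in: reward a test for accepting the reference
    solution while rejecting faulty candidate solutions."""
    if not run_test(test, reference):
        return 0.0  # invalid test: it rejects a correct solution
    rejected = sum(not run_test(test, c) for c in faulty_candidates)
    return rejected / max(len(faulty_candidates), 1)

def code_reward(solution, tests):
    """Illustrative stand-in: reward a solution for passing the generated test suite."""
    return sum(run_test(t, solution) for t in tests) / len(tests)

print(discrimination_reward(weak_test, correct, [buggy]))    # 0.0: test fails to discriminate
print(discrimination_reward(strong_test, correct, [buggy]))  # 1.0: test exposes the fault
print(code_reward(buggy, [weak_test, strong_test]))          # 0.5
print(code_reward(correct, [weak_test, strong_test]))        # 1.0
```

In UTRL proper, these scalar rewards would drive reinforcement-learning updates of the two LLMs in alternation; the sketch only shows why the objectives pull in opposite directions, which is the adversarial dynamic the abstract describes.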