Propose, Solve, Verify: Self-Play Through Formal Verification

📅 2025-12-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing unit-test-based self-play methods for code generation suffer from insufficient test coverage and error propagation, which hinders purely unsupervised training. Method: PSV is a self-play framework that replaces brittle unit tests with formal verification as a robust reward signal, combining difficulty-aware synthetic problem generation with expert-iteration-driven self-play to optimize code models end to end without human annotation. Contribution/Results: The resulting PSV-Verus model improves pass@1 by up to 9.6x over inference-only and expert-iteration baselines on three canonical code-generation benchmarks: HumanEval, MBPP, and CodeContests. Crucially, performance scales stably with both the number of generated problems and the number of training iterations, empirically validating the effectiveness and scalability of formal-verification-guided self-play for code synthesis.

📝 Abstract
Training models through self-play alone (without any human data) has been a longstanding goal in AI, but its effectiveness for training large language models remains unclear, particularly in code generation, where rewards based on unit tests are brittle and prone to error propagation. We study self-play in the verified code generation setting, where formal verification provides reliable correctness signals. We introduce Propose, Solve, Verify (PSV), a simple self-play framework in which formal verification signals are used to create a proposer capable of generating challenging synthetic problems and a solver trained via expert iteration. We use PSV to train PSV-Verus, which across three benchmarks improves pass@1 by up to 9.6x over inference-only and expert-iteration baselines. We show that performance scales with the number of generated questions and training iterations, and through ablations identify formal verification and difficulty-aware proposal as essential ingredients for successful self-play.
Problem

Research questions and friction points this paper is trying to address.

Can large language models be trained for code generation through self-play alone, without human data?
Unit-test-based rewards are brittle and prone to error propagation
Formal verification may provide the reliable correctness signal self-play needs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-play framework using formal verification signals
Proposer generates challenging synthetic problems
Solver trained via expert iteration with verification
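The loop the bullets above describe can be sketched as a minimal expert-iteration self-play round. This is a hypothetical sketch, not the paper's implementation: the function names (`propose`, `solve`, `verify`), the difficulty curriculum, and the verifier stub are all assumptions; in the real framework, `verify` would invoke a formal verifier such as Verus against each candidate's proof obligations.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Problem:
    spec: str          # natural-language spec plus formal postcondition (assumed shape)
    difficulty: int    # proposer-estimated difficulty (assumption)

@dataclass
class SelfPlayState:
    # verifier-approved (spec, solution) pairs collected for fine-tuning
    training_pairs: list = field(default_factory=list)

def propose(n: int, target_difficulty: int) -> list:
    """Stand-in proposer: emit n synthetic problems near a target difficulty."""
    return [Problem(spec=f"spec-{i}", difficulty=target_difficulty + random.choice([-1, 0, 1]))
            for i in range(n)]

def solve(problem: Problem, k: int) -> list:
    """Stand-in solver: sample k candidate programs for a spec."""
    return [f"candidate-{j}-for-{problem.spec}" for j in range(k)]

def verify(problem: Problem, candidate: str) -> bool:
    """Stub for a formal verifier: here an arbitrary deterministic filter.
    The real system would check the candidate against the spec's proof obligations."""
    return hash((problem.spec, candidate)) % 3 == 0

def expert_iteration(state: SelfPlayState, rounds: int, n_problems: int, k_samples: int):
    """One self-play run: propose problems, sample solutions, keep only verified ones."""
    difficulty = 1
    for _ in range(rounds):
        for prob in propose(n_problems, difficulty):
            verified = [c for c in solve(prob, k_samples) if verify(prob, c)]
            if verified:
                # expert iteration: only verifier-approved pairs enter the training set
                state.training_pairs.append((prob.spec, verified[0]))
        difficulty += 1  # difficulty-aware proposal: raise the target each round
    return state

state = expert_iteration(SelfPlayState(), rounds=3, n_problems=5, k_samples=4)
```

In a real run, the verified pairs would be used to fine-tune the solver (and, for difficulty-aware proposal, to recalibrate the proposer) before the next round, which is what makes the loop self-improving.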