EvoSyn: Generalizable Evolutionary Data Synthesis for Verifiable Learning

📅 2025-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing synthetic data methods suffer from hallucination, weak validation criteria, and poor cross-task generalization: they rely on task-specific heuristics or post-hoc filtering and lack a principled, verifiable evaluator. This paper proposes a task-agnostic evolutionary synthesis framework. Starting from minimal seed supervision, it jointly generates problems, diverse candidate solutions, and executable verification programs (artifacts), aligning generation and verification through a consistency-driven evaluation loop. Crucially, it eliminates hand-crafted rules by enforcing agreement between strategy-induced checks and human-annotated verification. The method combines evolutionary search, executable verification, strategy-guided generation, consistency-based assessment, reinforcement learning, and model distillation. Evaluated on LiveCodeBench and AgentBench-OS, it achieves significant performance gains, demonstrating strong generalization across complex reasoning and agent-oriented tasks.

📝 Abstract
Reliable verifiable data has become a key driver of capability gains in modern language models, enabling stable reinforcement learning with verifiable rewards and effective distillation that transfers competence across math, coding, and agentic tasks. Yet constructing generalizable synthetic verifiable data remains difficult due to hallucination-prone generation and weak or trivial verification artifacts that fail to separate strong from weak solutions. Existing approaches often rely on task-specific heuristics or post-hoc filters that do not transfer across domains and lack a principled, universal evaluator of verifiability. In this work, we introduce an evolutionary, task-agnostic, strategy-guided, executably-checkable data synthesis framework that, from minimal seed supervision, jointly synthesizes problems, diverse candidate solutions, and verification artifacts, and iteratively discovers strategies via a consistency-based evaluator that enforces agreement between human-annotated and strategy-induced checks. This pipeline upgrades filtering into principled synthesis: it reliably assembles coherent, verifiable training instances and generalizes without domain-specific rules. Our experiments demonstrate the effectiveness of the proposed approach under both RLVR and model distillation training paradigms. The results show that training with our synthesized data yields significant improvements on both the LiveCodeBench and AgentBench-OS tasks, highlighting the robust generalization of our framework.
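The consistency-based evaluation described in the abstract can be illustrated with a minimal, hypothetical sketch. This is not the paper's implementation: the function names (`consistency_score`, `discriminative`), the toy problem, and the acceptance rule below are our own assumptions about how agreement between a human-annotated check and a strategy-induced check might gate a synthesized instance, and how a verifier that accepts or rejects every candidate would be discarded as trivial.

```python
# Hypothetical sketch of a consistency-based evaluator for synthesized
# (problem, candidate solutions, verifier) triples. All names are illustrative.

def consistency_score(candidates, human_check, strategy_check):
    """Fraction of candidates on which the two verifiers give the same verdict."""
    agree = sum(human_check(c) == strategy_check(c) for c in candidates)
    return agree / len(candidates)

def discriminative(candidates, check):
    """A verifier is non-trivial only if it separates candidates:
    it must accept at least one and reject at least one."""
    verdicts = [check(c) for c in candidates]
    return any(verdicts) and not all(verdicts)

# Toy instance: the problem is "sum of integers 1..n"; candidates are programs.
problem_n = 10
candidates = [
    lambda n: n * (n + 1) // 2,  # correct closed form
    lambda n: sum(range(n)),     # off-by-one bug
    lambda n: n * n,             # wrong strategy entirely
]

# Seed-annotated check: compare against the known answer for n = 10.
human_check = lambda f: f(problem_n) == 55
# Strategy-induced check: an independently synthesized executable verifier.
strategy_check = lambda f: f(problem_n) == sum(range(1, problem_n + 1))

score = consistency_score(candidates, human_check, strategy_check)
# Keep the instance only if the checks fully agree and the synthesized
# verifier actually separates strong from weak solutions.
keep = score == 1.0 and discriminative(candidates, strategy_check)
```

Under this sketch, a synthesized verifier that disagrees with the seed annotation, or that passes or fails every candidate, is rejected, which mirrors the abstract's point that trivial verification artifacts "fail to separate strong from weak solutions."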
Problem

Research questions and friction points this paper is trying to address.

Developing generalizable synthetic verifiable data for learning
Overcoming hallucination-prone generation and weak verification artifacts
Creating task-agnostic evolutionary synthesis with executable verification checks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evolutionary task-agnostic framework synthesizes verifiable data
Jointly generates problems, solutions, and verification artifacts
Iteratively discovers strategies via consistency-based evaluator