Co-Evolving Agents: Learning from Failures as Hard Negatives

📅 2025-11-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
In real-world settings, high-quality task-specific data is scarce, causing agents to overfit by excessively relying on limited expert demonstrations. Method: This paper proposes a Coevolutionary Agent Framework that introduces a collaborative evolution mechanism between a target agent and a failure agent. The failure agent actively generates “near-success but ultimately failing” hard negative samples, transforming structured failures into discriminative supervisory signals. Integrated with preference optimization, self-generated trajectory sampling, and hard negative mining, the framework enables interactive, inter-agent improvement. Results: It significantly outperforms supervised fine-tuning across multiple benchmarks, demonstrating markedly improved generalization. Core contribution: This work is the first to model controllable failure as a principled source of negative samples—breaking the traditional self-improvement paradigm’s exclusive reliance on positive trajectories—and thereby effectively mitigates overfitting while enhancing decision-boundary learning.
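The hard negative mining step described above can be sketched as follows: among self-generated trajectories, the failing ones with the highest task reward are the "near-success but ultimately failing" samples the framework treats as negatives. The trajectory schema (`success` flag, scalar `reward`) is an illustrative assumption, not taken from the paper.

```python
def mine_hard_negatives(trajectories, k=4):
    """Select the k failing trajectories with the highest reward.

    'Near-success' failures sit close to the decision boundary and make
    the most informative negative samples. Each trajectory is assumed to
    be a dict with a 'success' bool and a scalar 'reward'; this schema is
    hypothetical and only for illustration.
    """
    failures = [t for t in trajectories if not t["success"]]
    # Highest-reward failures first: these came closest to succeeding.
    return sorted(failures, key=lambda t: t["reward"], reverse=True)[:k]
```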

📝 Abstract
The rapid progress of large foundation models has accelerated the development of task-specialized agents across diverse domains. However, the effectiveness of agents remains tightly coupled with the quality of training data, while curating task-specific datasets remains costly and often infeasible in real-world scenarios. Recent work has explored self-improving agents that autonomously generate, refine, and re-train on their own trajectories. A prominent line of approaches further leverages preference optimization by pairing predicted trajectories with scarce ground-truth trajectories, enabling agents to learn directly from their own failures. While these methods outperform supervised fine-tuning, their heavy reliance on predicted trajectories under limited ground-truth supervision leaves them prone to overfitting. To address this, we propose a co-evolving agents framework in which a target agent improves jointly with an auxiliary failure agent. The failure agent learns through preference optimization over failure trajectories from both the target and itself, thereby generating hard negatives that are close to success yet remain failures. Incorporating these informative hard negatives into the target agent's optimization sharpens decision boundaries and enhances generalization. Our comprehensive analysis and experiments across benchmark datasets show that our method not only shows improved performance but also demonstrates that failures, instead of being used as-is, can be systematically transformed into structured and valuable learning signals in self-improving agents.
Problem

Research questions and friction points this paper is trying to address.

Addresses overfitting in self-improving agents from limited supervision
Proposes co-evolving agents to generate hard negative learning signals
Transforms failures into structured data for enhanced generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Co-evolving target and failure agents for joint improvement
Failure agent generates hard negatives via preference optimization
Hard negatives sharpen decision boundaries and enhance generalization
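The preference optimization mentioned above pairs a successful trajectory against a failure-agent hard negative. A minimal sketch of a DPO-style per-pair loss is below, assuming summed token log-probabilities for each trajectory under the trained policy and a frozen reference policy; the exact objective the paper uses may differ.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO-style loss for one (success, hard-negative) trajectory pair.

    logp_* are summed token log-probabilities under the policy being
    trained; ref_logp_* are the same under a frozen reference policy.
    The 'chosen' trajectory is a success, the 'rejected' one a hard
    negative from the failure agent. Returns -log(sigmoid(margin)).
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Numerically stable -log(sigmoid(margin)) = softplus(-margin).
    return math.log1p(math.exp(-margin)) if margin > -30 else -margin
```

When the policy already separates the pair (positive margin), the loss decays toward zero; a hard negative that the policy still prefers yields a large gradient, which is what sharpens the decision boundary.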