🤖 AI Summary
Existing defect generation methods suffer from out-of-distribution bias due to artificial perturbations and fail to emulate realistic developer error patterns. To address this, we propose a novel defect synthesis paradigm wherein SWE agents autonomously introduce complex, diverse, and human-like defects while implementing new functionality. Our approach integrates synthetic code edits, supervised fine-tuning, automated test-based feedback, and large language model training, thereby enhancing both the authenticity and utility of synthetic defect data. Experiments demonstrate that models trained on only 1.2K synthesized defects surpass baseline models trained on 3K real-world samples: FrogBoss (32B) and FrogMini (14B) attain pass@1 scores of 54.6% and 45.3%, respectively, on standard benchmarks. These results validate our method's superior data efficiency and generalization capability.
📝 Abstract
High-quality bugs are key to training the next generation of language-model-based software engineering (SWE) agents. We introduce a novel method for synthetic generation of difficult and diverse bugs. Our method instructs SWE agents to introduce a feature into the codebase, in the course of which they may unintentionally break tests, resulting in bugs. Prior approaches often induce an out-of-distribution effect by generating bugs intentionally (e.g., by introducing local perturbations to existing code), which does not reflect realistic development processes. We perform qualitative analysis to demonstrate that our approach to generating bugs more closely reflects the patterns found in human-authored edits. Through extensive experiments, we demonstrate that our bugs provide more efficient training data for supervised fine-tuning, outperforming other bug datasets by 2% with half the training data (1.2k vs. 3k bugs). Training on our newly generated bugs in addition to existing bug datasets yields FrogBoss, a state-of-the-art 32B-parameter model with a pass@1 of 54.6% on SWE-bench Verified, and FrogMini, a state-of-the-art 14B model with a pass@1 of 45.3% on the same benchmark, both averaged over three seeds.
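The harvesting process the abstract describes can be sketched as a simple loop: ask an agent to implement a feature, run the existing test suite against its edit, and keep only the edits whose side effects break previously passing tests. The sketch below is a minimal illustration under assumed interfaces; `run_agent_feature_edit` and `run_test_suite` are hypothetical stubs standing in for a real SWE agent and a real test runner, not the paper's actual API.

```python
# Hypothetical sketch of the feature-driven bug harvesting loop: an agent edits
# a repo to add a feature, and edits that unintentionally break existing tests
# are kept as training bugs. All names here are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class HarvestedBug:
    task: str                              # the feature request given to the agent
    patch: str                             # the agent's code edit
    failing_tests: list = field(default_factory=list)  # tests it broke


def run_agent_feature_edit(task: str) -> str:
    # Stub: a real system would invoke an SWE agent on the codebase.
    return f"diff for: {task}"


def run_test_suite(patch: str) -> list:
    # Stub: a real system would apply the patch and rerun the repo's tests.
    # Here we simulate that tasks touching "refactor" break an existing test.
    return ["test_existing_api"] if "refactor" in patch else []


def harvest_bugs(feature_tasks: list) -> list:
    """Keep only edits whose side effects break previously passing tests."""
    bugs = []
    for task in feature_tasks:
        patch = run_agent_feature_edit(task)
        failing = run_test_suite(patch)
        if failing:  # unintentional breakage -> a candidate human-like bug
            bugs.append(HarvestedBug(task, patch, failing))
    return bugs


bugs = harvest_bugs(["add logging option", "refactor config loader"])
print(len(bugs))  # only the edit that broke a test is harvested
```

Because the bugs emerge as a byproduct of feature work rather than deliberate perturbation, the harvested edits stay closer to the distribution of mistakes human developers actually make.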