🤖 AI Summary
This study addresses the prevalent overestimation of advanced code evolution methods caused by insufficient comparison with simple baselines. Through systematic evaluation across three distinct tasks—finding improved mathematical bounds, designing agentic scaffolds, and machine learning competitions—the authors compare state-of-the-art evolutionary approaches against straightforward baselines built on large language model–based code generation, prompt engineering, rigorous benchmarking, and statistical analysis. The results demonstrate that the baseline methods consistently match or surpass complex evolutionary techniques on all tasks. The findings indicate that performance bottlenecks stem primarily from limitations in search space design and from the domain-specific knowledge embedded in prompts, rather than from deficiencies in the evolutionary algorithms themselves. To mitigate evaluation bias and methodological redundancy in the existing literature, the work proposes a more robust assessment paradigm that accounts for stochasticity and ensures fairer comparisons.
📝 Abstract
Code evolution is a family of techniques that use large language models to search the space of possible computer programs by evolving or mutating existing code. Many proposed code evolution pipelines show impressive performance but are often not compared to simpler baselines. We test how well two simple baselines perform across three domains: finding better mathematical bounds, designing agentic scaffolds, and machine learning competitions. We find that simple baselines match or exceed much more sophisticated methods in all three. Analyzing these results reveals several shortcomings in how code evolution is both developed and used. For the mathematical bounds, a problem's search space and the domain knowledge in the prompt chiefly dictate a search's performance ceiling and efficiency, with the code evolution pipeline being secondary. The primary challenge in finding improved bounds is therefore designing good search spaces, a task performed by domain experts, and not the search itself. When designing agentic scaffolds, we find that high variance in scaffold performance coupled with small evaluation datasets leads to suboptimal scaffolds being selected, so hand-designed majority-vote scaffolds perform best. We propose better evaluation methods that reduce evaluation stochasticity while keeping the code evolution economically feasible. We conclude with a discussion of avenues and best practices to enable more rigorous code evolution in future work.
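The selection failure mode the abstract describes, where noisy evaluations on small datasets crown a suboptimal scaffold, can be illustrated with a small simulation. This is a hedged sketch, not code from the paper: the candidate scores, noise level, and evaluation-set sizes below are hypothetical stand-ins for scaffold accuracies measured on a benchmark.

```python
import random

def simulate_selection(true_scores, eval_size, noise, trials=2000, seed=0):
    """Select the argmax candidate from noisy small-sample evaluations
    and report how often the truly best candidate is chosen."""
    rng = random.Random(seed)
    best = max(range(len(true_scores)), key=lambda i: true_scores[i])
    hits = 0
    for _ in range(trials):
        # Each candidate's measured score is its true score plus the
        # mean of `eval_size` noisy per-example observations, so a
        # smaller evaluation set means a noisier measurement.
        measured = [
            s + sum(rng.gauss(0, noise) for _ in range(eval_size)) / eval_size
            for s in true_scores
        ]
        if max(range(len(measured)), key=lambda i: measured[i]) == best:
            hits += 1
    return hits / trials

# Five hypothetical scaffolds with closely spaced true accuracies.
scores = [0.70, 0.71, 0.72, 0.73, 0.74]
small = simulate_selection(scores, eval_size=10, noise=0.15)
large = simulate_selection(scores, eval_size=200, noise=0.15)
print(f"best scaffold picked with 10 eval examples:  {small:.0%}")
print(f"best scaffold picked with 200 eval examples: {large:.0%}")
```

With the gaps between candidates smaller than the standard error of a 10-example evaluation, argmax selection picks the wrong scaffold most of the time; enlarging the evaluation set (or otherwise reducing stochasticity, as the paper proposes) makes the comparison far more reliable.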