Controlled Self-Evolution for Algorithmic Code Optimization

📅 2026-01-12
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work proposes a controllable self-evolution framework for algorithmic code optimization. Under limited computational budgets, existing methods suffer from initialization bias, undirected random operations lacking feedback, and insufficient exploitation of cross-task experience, which hinders the efficient discovery of superior algorithmic code. The proposed approach integrates structurally diverse planning-based initialization, feedback-guided directional mutation and crossover, and a hierarchical evolutionary memory that combines intra-task and cross-task experiences. Evaluated on the EffiBench-X benchmark, the method significantly outperforms current state-of-the-art techniques, works with multiple large language models, achieves high efficiency early in the evolution process, and continues to improve code performance over time.

📝 Abstract
Self-evolution methods enhance code generation through iterative "generate-verify-refine" cycles, yet existing approaches suffer from low exploration efficiency, failing to discover solutions with superior complexity within limited budgets. This inefficiency stems from initialization bias trapping evolution in poor solution regions, uncontrolled stochastic operations lacking feedback guidance, and insufficient experience utilization across tasks. To address these bottlenecks, we propose Controlled Self-Evolution (CSE), which consists of three key components. Diversified Planning Initialization generates structurally distinct algorithmic strategies for broad solution space coverage. Genetic Evolution replaces stochastic operations with feedback-guided mechanisms, enabling targeted mutation and compositional crossover. Hierarchical Evolution Memory captures both successful and failed experiences at inter-task and intra-task levels. Experiments on EffiBench-X demonstrate that CSE consistently outperforms all baselines across various LLM backbones. Furthermore, CSE achieves higher efficiency from early generations and maintains continuous improvement throughout evolution. Our code is publicly available at https://github.com/QuantaAlpha/EvoControl.
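To make the three components concrete, here is a minimal toy sketch of a controlled evolutionary loop. It is not the paper's implementation: the fitness function, the shrink-based mutation heuristic, and the per-generation memory list are all illustrative stand-ins (in CSE, candidates are LLM-generated programs, fitness is measured code efficiency, and the memory spans intra- and inter-task experience).

```python
import random

random.seed(0)

# Toy stand-in for "code efficiency": minimize the sum of squares.
# In CSE, this would be a measured runtime/memory score of generated code.
def fitness(candidate):
    return -sum(x * x for x in candidate)  # higher is better

def diversified_init(pop_size, dim):
    # Diversified initialization: many structurally distinct starting
    # points instead of one (possibly biased) seed solution.
    return [[random.uniform(-10.0, 10.0) for _ in range(dim)]
            for _ in range(pop_size)]

def guided_mutate(candidate):
    # Feedback-guided mutation: target the component that contributes
    # most to the penalty, rather than mutating a random position.
    worst = max(range(len(candidate)), key=lambda i: candidate[i] ** 2)
    child = list(candidate)
    child[worst] *= random.uniform(0.3, 0.9)  # shrink the worst term
    return child

def crossover(a, b):
    # Compositional crossover: take each component from whichever
    # parent does better on that dimension.
    return [ai if ai * ai < bi * bi else bi for ai, bi in zip(a, b)]

def evolve(generations=30, pop_size=8, dim=4):
    pop = diversified_init(pop_size, dim)
    memory = []  # flat stand-in for the hierarchical evolution memory
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        memory.append(pop[0])          # record the generation's best
        elite = pop[: pop_size // 2]   # keep the top half
        mutants = [guided_mutate(random.choice(elite)) for _ in range(2)]
        crossed = [crossover(*random.sample(elite, 2)) for _ in range(2)]
        pop = elite + mutants + crossed
    return max(pop, key=fitness), memory

best, memory = evolve()
```

Because elites are preserved each generation, the best fitness is non-decreasing, which mirrors the paper's claim of continuous improvement throughout evolution.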
Problem

Research questions and friction points this paper is trying to address.

self-evolution
code optimization
exploration efficiency
initialization bias
experience utilization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Controlled Self-Evolution
Genetic Evolution
Diversified Planning Initialization
Hierarchical Evolution Memory
Algorithmic Code Optimization