ImprovEvolve: Ask AlphaEvolve to Improve the Input Solution and Then Improvise

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a novel large language model (LLM)-guided evolutionary computing paradigm for tackling complex mathematical construction and optimization problems. By reparameterizing evolutionary programs into a modular architecture featuring initialization, refinement, and controllable perturbation mechanisms, the approach substantially reduces the cognitive load on the LLM while enhancing both search efficiency and solution quality. Integrating program synthesis with an iterative optimization scheduling strategy, the method establishes new state-of-the-art results for hexagonal packing with 11, 12, 15, and 16 hexagons; further manual fine-tuning improves outcomes for configurations of 14, 17, and 23 hexagons. It also achieves a new lower bound of 0.96258 for the second-order autocorrelation inequality, demonstrating the effectiveness of the modular parameterization.

📝 Abstract
Recent advances in LLM-guided evolutionary computation, particularly AlphaEvolve, have demonstrated remarkable success in discovering novel mathematical constructions and solving challenging optimization problems. In this article, we present ImprovEvolve, a simple yet effective technique for enhancing LLM-based evolutionary approaches such as AlphaEvolve. Given an optimization problem, the standard approach is to evolve program code that, when executed, produces a solution close to the optimum. We propose an alternative program parameterization that maintains the ability to construct optimal solutions while reducing the cognitive load on the LLM. Specifically, we evolve a program (implementing, e.g., a Python class with a prescribed interface) that provides the following functionality: (1) propose a valid initial solution, (2) improve any given solution in terms of fitness, and (3) perturb a solution with a specified intensity. The optimum can then be approached by iteratively applying improve() and perturb() with a scheduled intensity. We evaluate ImprovEvolve on challenging problems from the AlphaEvolve paper: hexagon packing in a hexagon and the second autocorrelation inequality. For hexagon packing, the evolved program achieves new state-of-the-art results for 11, 12, 15, and 16 hexagons; a lightly human-edited variant further improves results for 14, 17, and 23 hexagons. For the second autocorrelation inequality, the human-edited program achieves a new state-of-the-art lower bound of 0.96258, improving upon AlphaEvolve's 0.96102.
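The abstract prescribes an interface with three capabilities — propose a valid initial solution, improve a solution's fitness, and perturb a solution with a given intensity — driven by an outer loop that alternates improve() and perturb() under a scheduled intensity. The following is a minimal sketch of how such an interface and schedule could fit together on a toy 1-D problem; the class name, the method name init(), the toy objective, and the linear decay schedule are all illustrative assumptions, and in ImprovEvolve the method bodies themselves would be the artifact evolved by the LLM:

```python
import random

class CandidateProgram:
    """Hypothetical instance of the prescribed interface, for a toy
    1-D problem: maximize fitness(x) = -(x - 3)^2."""

    def fitness(self, x):
        return -(x - 3.0) ** 2          # higher is better

    def init(self):
        # (1) propose a valid initial solution
        return random.uniform(-10.0, 10.0)

    def improve(self, x, step=0.1):
        # (2) improve the solution: simple hill climbing toward
        # the better of the two neighbors
        return max((x - step, x, x + step), key=self.fitness)

    def perturb(self, x, intensity):
        # (3) perturb with a specified intensity: a random jump
        # whose scale is set by the outer schedule
        return x + random.gauss(0.0, intensity)


def optimize(prog, rounds=50, improve_steps=200):
    """Outer loop from the abstract: iterate improve() and perturb()
    with a decaying intensity schedule (annealing-style; the linear
    decay is an assumption, not specified by the paper)."""
    x = prog.init()
    best = x
    for r in range(rounds):
        for _ in range(improve_steps):
            x = prog.improve(x)
        if prog.fitness(x) > prog.fitness(best):
            best = x
        intensity = 5.0 * (1.0 - r / rounds)
        x = prog.perturb(best, intensity)
    return best


result = optimize(CandidateProgram())
```

For the toy objective this converges near the optimum x = 3; the point of the parameterization is that the LLM only has to write the three short methods, while the fixed outer schedule supplies the global search behavior.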
Problem

Research questions and friction points this paper is trying to address.

LLM-guided evolutionary computation
optimization problems
program parameterization
cognitive load
solution improvement
Innovation

Methods, ideas, or system contributions that make the work stand out.

ImprovEvolve
LLM-guided evolution
program parameterization
solution improvement
evolutionary computation