AVO: Agentic Variation Operators for Autonomous Evolutionary Search

📅 2026-03-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a limitation of traditional evolutionary search: its reliance on fixed mutation operators and handcrafted heuristics, which leaves it unable to autonomously discover high-performance attention kernels. The authors propose the Agentic Variation Operator (AVO), which elevates large language models from mere candidate generators to agent-like mutation mechanisms capable of self-proposing, repairing, critiquing, and validating kernel designs. By integrating lineage information, a domain-specific knowledge base, and an execution-feedback loop, AVO enables continuous autonomous evolution of attention kernels. Within seven days on an NVIDIA B200 GPU, the method discovers kernels outperforming cuDNN by up to 3.5% and FlashAttention-4 by up to 10.5%. When transferred to grouped-query attention, it achieves gains of up to 7.0% over cuDNN and 9.3% over FlashAttention-4 within just 30 minutes.

📝 Abstract
Agentic Variation Operators (AVO) are a new family of evolutionary variation operators that replace the fixed mutation, crossover, and hand-designed heuristics of classical evolutionary search with autonomous coding agents. Rather than confining a language model to candidate generation within a prescribed pipeline, AVO instantiates variation as a self-directed agent loop that can consult the current lineage, a domain-specific knowledge base, and execution feedback to propose, repair, critique, and verify implementation edits. We evaluate AVO on attention, among the most aggressively optimized kernel targets in AI, on NVIDIA Blackwell (B200) GPUs. Over 7 days of continuous autonomous evolution on multi-head attention, AVO discovers kernels that outperform cuDNN by up to 3.5% and FlashAttention-4 by up to 10.5% across the evaluated configurations. The discovered optimizations transfer readily to grouped-query attention, requiring only 30 minutes of additional autonomous adaptation and yielding gains of up to 7.0% over cuDNN and 9.3% over FlashAttention-4. Together, these results show that agentic variation operators move beyond prior LLM-in-the-loop evolutionary pipelines by elevating the agent from candidate generator to variation operator, and can discover performance-critical micro-architectural optimizations that produce kernels surpassing state-of-the-art expert-engineered attention implementations on today's most advanced GPU hardware.
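The abstract describes variation as a self-directed agent loop that proposes an edit, checks it against execution feedback, and repairs or critiques it before accepting. A minimal sketch of that loop is below; the paper does not publish its implementation, so every name here is illustrative, and a stubbed `propose`/`repair` pair stands in for the LLM agent and for real kernel compilation and benchmarking.

```python
import random

def propose(parent, lineage, knowledge_base):
    # Stand-in for the agent's proposal step. In AVO this would be an LLM
    # consulting the lineage and knowledge base; here we just perturb a
    # numeric "tile" parameter to mimic a kernel-config mutation.
    delta = random.choice([-16, 16, 32])
    return {**parent, "tile": max(16, parent["tile"] + delta)}

def validate(candidate):
    # Stand-in for compile-and-run execution feedback: reject tile sizes
    # that are not powers of two (a proxy for a failed kernel build).
    t = candidate["tile"]
    ok = t & (t - 1) == 0
    return ok, None if ok else f"tile={t} is not a power of two"

def repair(candidate, error):
    # Stand-in for the agent's self-repair/critique step: respond to the
    # validation error by rounding the tile down to a power of two.
    t = candidate["tile"]
    return {**candidate, "tile": 1 << (t.bit_length() - 1)}

def agentic_variation(parent, lineage, knowledge_base, max_repairs=3):
    """One AVO-style variation: propose, then validate/repair in a loop."""
    child = propose(parent, lineage, knowledge_base)
    for _ in range(max_repairs):
        ok, error = validate(child)
        if ok:
            lineage.append(child)      # record the accepted edit
            return child
        child = repair(child, error)   # critique and fix, then re-validate
    return parent                      # all repairs failed: keep the parent

random.seed(0)
lineage = []
kernel = {"tile": 64}
for _ in range(5):
    kernel = agentic_variation(kernel, lineage, knowledge_base={})
print(kernel, len(lineage))
```

The point of the sketch is the control flow, not the mutation itself: the operator owns its own validate/repair cycle instead of handing raw candidates back to a fixed pipeline, which is what distinguishes an agentic variation operator from an LLM used purely as a candidate generator.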
Problem

Research questions and friction points this paper is trying to address.

evolutionary search
variation operators
attention kernels
autonomous optimization
GPU acceleration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agentic Variation Operators
Autonomous Evolutionary Search
Code Optimization
Attention Kernels
LLM-based Mutation