Agentic Code Optimization via Compiler-LLM Cooperation

📅 2026-04-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional compilers struggle to exploit optimization opportunities that require high-level semantic understanding, while large language models (LLMs), despite their generative capabilities, often introduce correctness errors. This work proposes a collaborative multi-agent optimization framework that integrates compiler-driven analysis with LLM-based code generation across multiple abstraction levels, complemented by automated test validation and dynamic resource scheduling. By rigorously enforcing correctness through verification mechanisms, the approach achieves consistent performance gains without compromising reliability. Evaluated on multiple benchmarks, the method significantly outperforms both conventional compilers and single-stage LLM-based optimization strategies, delivering up to a 1.25× speedup.
📝 Abstract
Generating performant executables from high-level languages is critical to software performance across a wide range of domains. Modern compilers perform this task by passing code through a series of well-studied optimizations at progressively lower levels of abstraction, but may miss optimization opportunities that require high-level reasoning about a program's purpose. Recent work has proposed using LLMs to fill this gap. While LLMs can achieve large speedups on some programs, they frequently generate code that is incorrect. In this work, we propose a method to balance the correctness of conventional compiler optimizations with the "creativity" of LLM-based code generation: compiler-LLM cooperation. Our approach integrates existing compiler optimization passes with LLM-based code generation at multiple levels of abstraction, retaining the best features of both types of code optimization. We realize our approach with a multi-agent system that includes (1) LLM-based optimization agents for each level of abstraction, (2) individual compiler constituents as tools, (3) an LLM-based test generation agent that probes the correctness and performance of generated code, and (4) a guiding LLM that orchestrates the other components. The strategy enables LLM-based optimization of input programs at multiple levels of abstraction and introduces a method for distributing computational budget between levels. Our extensive evaluation shows that compiler-LLM cooperation outperforms both existing compiler optimizations and level-specific LLM-based baselines, producing speedups of up to 1.25x.
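The abstract's architecture — per-level optimization agents, a validating test agent, and an orchestrator that splits a compute budget across abstraction levels — can be sketched as a propose-validate-keep loop. The sketch below is purely illustrative: the names (`Candidate`, `optimize`, `validate`), the toy agents, and the simulated runtimes are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the orchestration loop described in the abstract:
# each abstraction level's agent proposes candidates, a test-agent stand-in
# validates them, and a per-level budget bounds how many proposals are tried.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Candidate:
    code: str          # program text at some abstraction level
    runtime: float     # measured (here: simulated) runtime

def validate(candidate: Candidate, tests: list) -> bool:
    """Test-agent stand-in: reject any candidate that fails a check."""
    return all(t(candidate.code) for t in tests)

def optimize(program: Candidate,
             agents: dict[str, Callable[[Candidate], Candidate]],
             tests: list,
             budget: dict[str, int]) -> Candidate:
    """Orchestrator stand-in: spend each level's budget on propose-validate-keep."""
    best = program
    for level, agent in agents.items():        # e.g. "source", "IR"
        for _ in range(budget.get(level, 0)):
            cand = agent(best)
            # Keep a candidate only if it is both correct and faster,
            # mirroring the paper's correctness-preserving acceptance rule.
            if validate(cand, tests) and cand.runtime < best.runtime:
                best = cand
    return best
```

A toy run: with a "source"-level agent that reliably shaves 20% off the runtime and a budget of three attempts, `optimize` accepts all three candidates and returns a program with runtime 10.0 × 0.8³ = 5.12. In the real system each agent call would be an LLM or compiler-pass invocation and `validate` would execute generated tests.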
Problem

Research questions and friction points this paper is trying to address.

compiler optimization
LLM-based code generation
code correctness
performance optimization
high-level reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compiler-LLM Cooperation
Agentic Code Optimization
Multi-agent System
Abstraction-aware Optimization
Correctness-Preserving LLM
Benjamin Mikek
AWS AI, USA and Georgia Institute of Technology, USA
Danylo Vashchilenko
AWS AI, USA
Bryan Lu
AWS AI, USA
Panpan Xu
Principal Applied Scientist, AWS AI/ML