Agentic Auto-Scheduling: An Experimental Study of LLM-Guided Loop Optimization

📅 2025-11-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Automated optimization of complex nested loops on modern hardware remains challenging due to the combinatorial space of legal and profitable loop transformations. Method: This paper proposes ComPilot, an LLM-guided closed-loop compilation framework that uses a general-purpose large language model—without fine-tuning or in-context examples—as an interactive optimization agent. Grounded in the polyhedral model, the LLM proposes loop transformation schedules, which are iteratively refined using compiler feedback on both legality (semantic correctness) and measured speedup. Contribution/Results: To the authors' knowledge, this is the first zero-shot, feedback-driven autonomous scheduling approach, bringing the agentic-AI paradigm into compiler optimization. On the PolyBench benchmark suite, it achieves geometric-mean speedups of 2.66× (single run) and 3.54× (best of five runs) over the original code, and is competitive with—and in many cases outperforms—the state-of-the-art Pluto polyhedral optimizer, demonstrating the feasibility of LLM–compiler co-optimization for efficient, reliable automatic loop optimization.

📝 Abstract
Automatic code optimization remains a difficult challenge, particularly for complex loop nests on modern hardware. This paper investigates a novel approach to code optimization where Large Language Models (LLMs) guide the process through a closed-loop interaction with a compiler. We present ComPilot, an experimental framework that leverages off-the-shelf LLMs, without any task-specific fine-tuning, as interactive optimization agents. ComPilot establishes a feedback loop where an LLM proposes transformations for a given loop nest to a compiler. The compiler attempts the transformations, reporting back legality status and measured speedup or slowdown. The LLM utilizes this concrete feedback to iteratively refine its optimization strategy. Our extensive evaluation across the PolyBench benchmark suite demonstrates the effectiveness of this zero-shot approach. ComPilot achieves geometric mean speedups of 2.66x (single run) and 3.54x (best-of-5 runs) over the original code. Furthermore, ComPilot demonstrates competitive performance against the state-of-the-art Pluto polyhedral optimizer, outperforming it in many cases. This experimental study demonstrates that general-purpose LLMs can effectively guide the code optimization process when grounded by compiler feedback, opening promising research directions for agentic AI in code optimization.
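The propose–check–refine loop described in the abstract can be sketched roughly as follows. This is an illustrative mock-up, not the actual ComPilot implementation: `propose_schedule` stands in for the LLM call, `try_schedule` stands in for the compiler's legality check and benchmark run, and the schedule names and speedup numbers are invented for demonstration.

```python
# Illustrative sketch of a ComPilot-style feedback loop (all names hypothetical).

def propose_schedule(kernel_src, history):
    """Stand-in for an LLM call: returns a candidate transformation schedule.
    A real agent would condition on the kernel source and the feedback history;
    here we simply cycle through a fixed candidate list."""
    candidates = ["interchange(i,j)", "tile(32)", "tile(64)", "unroll(4)"]
    return candidates[len(history) % len(candidates)]

def try_schedule(kernel_src, schedule):
    """Stand-in for the compiler: reports (legality, measured speedup).
    A real system would apply the schedule, verify semantics, and benchmark."""
    mock_results = {
        "interchange(i,j)": (True, 1.4),
        "tile(32)": (True, 2.1),
        "tile(64)": (False, 0.0),   # illegal transformation is rejected
        "unroll(4)": (True, 1.2),
    }
    return mock_results.get(schedule, (False, 0.0))

def optimize(kernel_src, iterations=5):
    """Closed loop: propose, test, feed results back, keep the best legal schedule."""
    history, best = [], ("original", 1.0)
    for _ in range(iterations):
        schedule = propose_schedule(kernel_src, history)
        legal, speedup = try_schedule(kernel_src, schedule)
        history.append((schedule, legal, speedup))  # concrete feedback for the next proposal
        if legal and speedup > best[1]:
            best = (schedule, speedup)
    return best

print(optimize("for(i) for(j) C[i][j] += ..."))  # → ('tile(32)', 2.1)
```

The key design point mirrored here is that the agent never needs to be correct up front: illegal or slow proposals are cheap, because the compiler grounds every suggestion with concrete legality and performance feedback.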
Problem

Research questions and friction points this paper is trying to address.

LLMs guide loop optimization via compiler feedback loop
Zero-shot approach achieves speedups over original code
Competes with state-of-the-art polyhedral optimizer performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs guide code optimization via closed-loop compiler interaction
Compiler feedback refines LLM optimization strategies iteratively
Zero-shot approach achieves competitive performance against the state-of-the-art Pluto optimizer
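As a concrete illustration of the kind of transformation such schedules encode, here is a toy loop interchange on a matrix multiply. The paper itself targets C loop nests from PolyBench; this Python sketch only illustrates the legality idea: both loop orders compute the same result, while the i-k-j order typically improves cache locality in row-major layouts.

```python
# Toy example of a schedule a polyhedral optimizer reasons about: loop interchange.

def matmul_ijk(A, B, n):
    """Naive i-j-k loop order."""
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_ikj(A, B, n):
    """Interchanged j and k loops: same semantics, better spatial locality on B and C."""
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            a = A[i][k]
            for j in range(n):
                C[i][j] += a * B[k][j]
    return C

n = 4
A = [[i + j for j in range(n)] for i in range(n)]
B = [[i * j + 1 for j in range(n)] for i in range(n)]
assert matmul_ijk(A, B, n) == matmul_ikj(A, B, n)  # interchange is legal here
```

Checking this equivalence automatically (and measuring the resulting speedup) is exactly the compiler-side feedback that grounds the LLM's proposals.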
Massinissa Merouani
New York University Abu Dhabi, Abu Dhabi, UAE
Islem Kara Bernou
New York University Abu Dhabi, Abu Dhabi, UAE
Riyadh Baghdadi
Assistant Professor (NYUAD); Global Network Assistant Professor (NYU); Research Affiliate (MIT)
Compilers · Machine Learning · Automatic optimization · Deep learning frameworks