🤖 AI Summary
This study investigates the feasibility and limitations of AI-driven code optimization. We systematically evaluate three traditional compilers (GCC, LLVM, CETUS) against two large language models (LLMs), CodeLlama-70B and DeepSeek-Coder, on both performance and functional correctness. To this end, we introduce the first LLM-specific benchmark and automated verification framework for compiler optimizations, enabling joint assessment of speedup and correctness. We further employ compilation-strategy-embedded prompting techniques, Detailed Instruction Prompting (DIP) and Chain of Thought (CoT), to strengthen LLM reasoning on optimization tasks. Experimental results show that CodeLlama-70B achieves speedups of up to 1.75x, surpassing the best-performing compiler, CETUS (up to 1.67x); however, it exhibits high error rates on large-scale code, underscoring the need for rigorous verification. Our core contributions are: (1) a reproducible evaluation methodology for LLM-based compilation, (2) empirical evidence of prompt engineering's critical role in LLM-driven optimization, and (3) an "LLM + formal verification" co-optimization paradigm.
📝 Abstract
Traditional optimizing compilers have played an important role in adapting to the growing complexity of modern software systems. The need for efficient parallel programming on current architectures demands strong optimization techniques. The advent of Large Language Models (LLMs) raises intriguing questions about the potential of these AI approaches to revolutionize code optimization methodologies. This work aims to answer an essential question for the compiler community: "Can AI-driven models revolutionize the way we approach code optimization?" To address this question, we present a comparative analysis of three classical optimizing compilers and two recent large language models, evaluating their respective abilities and limitations in optimizing code for maximum efficiency. In addition, we introduce a benchmark suite of challenging optimization patterns and an automatic mechanism for evaluating the performance and correctness of the code generated by LLMs. We used three different prompting strategies to evaluate the LLMs: Simple Instruction Prompting (IP), Detailed Instruction Prompting (DIP), and Chain of Thought (CoT). A key finding is that while LLMs have the potential to outperform current optimizing compilers, they often generate incorrect code for large code sizes, calling for automated verification methods. In addition, expressing a compiler strategy as part of the LLM's prompt substantially improves its overall performance. Our evaluation across three benchmark suites shows CodeLlama-70B to be the superior LLM, achieving speedups of up to 1.75x, while CETUS is the best of the current optimizing compilers, with a maximum speedup of 1.67x. We also found substantial differences among the three prompting strategies.