🤖 AI Summary
Existing compiler optimization techniques suffer from inflexible fixed pipelines, low search efficiency, and poor generalization, hindering transfer across languages and programs. Method: We propose the first generalizable "compiler world model," framing optimization as a dynamic system of state evolution and transition behavior and decoupling environment simulation from policy learning. Our approach integrates model-based reinforcement learning, program representation learning, and large-scale compilation trajectory modeling, underpinned by an LLVM IR-based simulation environment. Contribution/Results: On the CompilerGym benchmark, our method surpasses LLVM's default optimizations and prior state-of-the-art approaches in a zero-shot setting. It delivers substantial end-to-end performance gains and reduces value prediction error by 37%. Crucially, it is the first to demonstrate zero-shot generalization across both programming languages and diverse programs, marking a foundational advance in adaptive, portable compiler optimization.
📝 Abstract
Effective code optimization in compilers is crucial for computer and software engineering. The success of these optimizations primarily depends on the selection and ordering of the optimization passes applied to the code. While most compilers rely on a fixed sequence of optimization passes, current methods for finding the optimal sequence either employ impractically slow search algorithms or use learning methods that struggle to generalize to code unseen during training. We introduce CompilerDream, a model-based reinforcement learning approach to general code optimization. CompilerDream comprises a compiler world model that accurately simulates the intrinsic properties of optimization passes and an agent trained on this model to produce effective optimization strategies. By training on a large-scale program dataset, CompilerDream can serve as a general code optimizer across various application scenarios and source-code languages. Our extensive experiments first highlight CompilerDream's strong optimization capabilities in autotuning, where it leads the CompilerGym leaderboard. More importantly, when trained at scale, the zero-shot generalization ability of its compiler world model and agent excels across diverse datasets, surpassing LLVM's built-in optimizations and other state-of-the-art methods in both value prediction and end-to-end code optimization.
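To make the world-model idea concrete, here is a minimal, purely illustrative sketch (not the paper's implementation): pass ordering is framed as planning inside a learned dynamics model that predicts, for each candidate pass, the next program state and a reward such as instruction-count reduction, so the agent can search in imagination instead of repeatedly invoking the real compiler. The `world_model` and `plan` functions and the toy reward arithmetic below are hypothetical stand-ins.

```python
# Hypothetical sketch of planning over LLVM optimization passes with a
# learned "world model". The model here is a deterministic toy function,
# not a trained network; pass names are real LLVM passes used as labels only.
PASSES = ["-mem2reg", "-gvn", "-licm", "-instcombine"]

def world_model(state, pass_name):
    """Stand-in for a learned dynamics model: predicts the next program
    feature state and the reward of applying a pass, without running
    the actual compiler."""
    key = (state * 31 + sum(map(ord, pass_name))) % 1009
    reward = (key % 17) / 100.0 - 0.03  # toy predicted gain in [-0.03, 0.13]
    return key, reward

def plan(state, horizon=3):
    """Greedy lookahead in the imagined environment: at each step, pick the
    pass with the highest predicted reward, then advance the imagined state."""
    sequence, total = [], 0.0
    for _ in range(horizon):
        best = max(PASSES, key=lambda p: world_model(state, p)[1])
        state, reward = world_model(state, best)
        sequence.append(best)
        total += reward
    return sequence, total

seq, gain = plan(state=0)
print(seq, round(gain, 3))
```

In the actual method, the toy `world_model` would be replaced by a model trained on large-scale compilation trajectories, and greedy lookahead by a policy learned on imagined rollouts; the sketch only shows the decoupling of environment simulation from policy decisions.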