LOOPer: A Learned Automatic Code Optimizer For Polyhedral Compilers

📅 2024-03-18
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
Existing polyhedral compilers that use deep learning cost models support only a small subset of affine transformations and rely on oversimplified program assumptions—such as rectangular iteration domains and single loop nests—hindering automatic selection of high-benefit schedules and limiting generality. This paper presents LOOPer, the first deep learning–based polyhedral autoscheduler that covers a large space of affine transformations and complex program structures, including non-rectangular iteration domains and multiple loop nests. Evaluated on the PolyBench benchmark suite, LOOPer achieves geometric mean speedups of 1.84× over Tiramisu and 1.42× over Pluto, significantly extending the optimization capability and practical applicability of learned autoschedulers to complex, real-world programs.

📝 Abstract
While polyhedral compilers have shown success in implementing advanced code transformations, they still face challenges in selecting the ones that lead to the most profitable speedups. This has motivated the use of machine learning based cost models to guide the search for polyhedral optimizations. State-of-the-art polyhedral compilers have demonstrated a viable proof-of-concept of such an approach. While promising, this approach still faces significant limitations. State-of-the-art polyhedral compilers that use a deep learning cost model only support a small subset of affine transformations, limiting their ability to explore complex code transformations. Furthermore, their applicability does not scale beyond simple programs, thus excluding many program classes from their scope, such as those with non-rectangular iteration domains or multiple loop nests. These limitations significantly impact the generality of such compilers and autoschedulers and put into question the whole approach. In this paper, we introduce LOOPer, the first polyhedral autoscheduler that uses a deep learning based cost model and covers a large space of affine transformations and programs. LOOPer allows the optimization of an extensive set of programs while being effective at applying complex sequences of polyhedral transformations. We implement and evaluate LOOPer and show that it achieves competitive speedups over the state-of-the-art. On the PolyBench benchmarks, LOOPer achieves a geometric mean speedup of 1.84x over Tiramisu and 1.42x over Pluto, two state-of-the-art polyhedral autoschedulers.
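The overall approach described in the abstract—a search over candidate transformation sequences guided by a learned cost model—can be sketched as follows. This is an illustrative sketch only, not LOOPer's actual implementation: `predict_speedup` is a hypothetical stand-in for a trained deep network, and real autoschedulers prune the search space (e.g., with beam search) rather than enumerating it exhaustively.

```python
# Sketch of cost-model-guided schedule search (illustrative only; not
# LOOPer's actual code). A trained deep network would replace the stub
# predict_speedup() below.
from itertools import permutations

CANDIDATE_TRANSFORMS = ["interchange", "tile", "unroll"]

def predict_speedup(schedule):
    """Hypothetical stand-in for a learned cost model: maps a sequence
    of transformations to a predicted speedup over the baseline."""
    weights = {"interchange": 1.3, "tile": 1.5, "unroll": 1.1}
    score = 1.0
    for t in schedule:
        score *= weights[t]
    return score

def search_best_schedule(max_len=2):
    """Enumerate short transformation sequences and keep the one the
    cost model ranks highest."""
    best, best_score = (), 1.0
    for n in range(1, max_len + 1):
        for schedule in permutations(CANDIDATE_TRANSFORMS, n):
            s = predict_speedup(schedule)
            if s > best_score:
                best, best_score = schedule, s
    return best, best_score
```

The key design point the paper argues is that the quality of `predict_speedup` determines whether such a search finds profitable schedules without executing every candidate.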
Problem

Research questions and friction points this paper is trying to address.

Selecting optimal polyhedral transformations for speedups
Supporting complex affine transformations in compilers
Scaling optimization to non-rectangular and multi-loop programs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep learning cost model for polyhedral optimizations
Supports extensive affine transformations and programs
Achieves competitive speedups over state-of-the-art compilers
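The affine transformations referred to above can be illustrated with a classic example: loop tiling. The sketch below, in plain Python, shows the before/after shape of a tiled matrix multiply; a polyhedral autoscheduler like LOOPer derives such transformations through the polyhedral framework rather than hand-rewritten loops, and the tile size `T` here is an arbitrary illustrative choice.

```python
def matmul_naive(A, B, n):
    """Baseline triple loop: C[i][j] += A[i][k] * B[k][j]."""
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_tiled(A, B, n, T=2):
    """The same computation after loop tiling, one of the affine
    transformations a polyhedral autoscheduler can apply to improve
    cache locality. Outer loops walk tiles; inner loops walk within
    a tile, clamped with min() at the iteration-domain boundary."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, T):
        for jj in range(0, n, T):
            for kk in range(0, n, T):
                for i in range(ii, min(ii + T, n)):
                    for j in range(jj, min(jj + T, n)):
                        for k in range(kk, min(kk + T, n)):
                            C[i][j] += A[i][k] * B[k][j]
    return C
```

Both loops compute the same result; the transformation only reorders iterations, which is exactly the class of legality-preserving rewrites the polyhedral model reasons about.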
Massinissa Merouani
New York University Abu Dhabi
Khaled Afif Boudaoud
New York University Abu Dhabi
Iheb Nassim Aouadj
Research Engineer, New York University Abu Dhabi
Nassim Tchoulak
New York University Abu Dhabi
Islam Kara Bernou
New York University Abu Dhabi
Hamza Benyamina
New York University Abu Dhabi
F. B. Tayeb
École Nationale Supérieure d’Informatique
K. Benatchba
École Nationale Supérieure d’Informatique
Hugh Leather
University of Edinburgh
Machine Learning · Compilers
Riyadh Baghdadi
Assistant Professor (NYUAD); Global Network Assistant Professor (NYU); Research Affiliate (MIT)
Compilers · Machine Learning · Automatic optimization · Deep learning frameworks