Flashlight: PyTorch Compiler Extensions to Accelerate Attention Variants

📅 2025-11-03
🤖 AI Summary
Existing implementations of LLM attention variants rely either on hand-optimized kernels or on static templates (e.g., FlexAttention), limiting support for data-dependent and more general attention patterns. Method: The paper proposes Flashlight, a compiler-native PyTorch attention acceleration framework that eliminates the need for predefined templates or domain-specific kernels; it performs computation-graph rewriting, automatic operator fusion, and tiling-based optimization to transparently generate high-performance kernels at compile time. Contribution/Results: The approach lifts FlexAttention's restriction to static structures, enabling unified support for arbitrary data-dependent attention forms expressible in PyTorch. Experiments show that the generated kernels match or exceed FlexAttention's performance while fully preserving PyTorch's native programming flexibility, significantly reducing development and deployment overhead for novel attention mechanisms.

📝 Abstract
Attention is a fundamental building block of large language models (LLMs), so there have been many efforts to implement it efficiently. For example, FlashAttention leverages tiling and kernel fusion to optimize attention. Recently, a number of variants of attention have been introduced to enhance model quality or efficiency. Supporting them efficiently remains difficult since they usually require specialized kernels or hand-tuned implementations. FlexAttention recently addressed part of this gap by using static programming templates to support FlashAttention-like kernels for a subset of attention variants. In this paper, we introduce Flashlight, a compiler-native framework within the PyTorch ecosystem that automatically generates fused, FlashAttention-style kernels for arbitrary attention-based programs, without relying on static templates or predefined kernel specializations. Flashlight leverages PyTorch's compilation workflow to fuse and tile attention computations transparently, enabling efficient execution for diverse attention patterns. Not only does it support all variants expressible in the FlexAttention model but it also handles more general, data-dependent attention formulations that are beyond the capabilities of FlexAttention. Our results show that Flashlight produces kernels with competitive or superior performance to FlexAttention, while offering the flexibility of native PyTorch code, enabling developers to rapidly explore new attention models without sacrificing performance.
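To make "data-dependent attention formulations" concrete, here is a minimal NumPy sketch, not taken from the paper: the function name and the thresholding rule are illustrative. The mask below depends on the score values themselves through a per-row reduction; template approaches that modify scores elementwise from positional indices cannot readily express such a row-wise reduction, yet it is ordinary tensor code of the kind a compiler-native framework like Flashlight could fuse.

```python
import numpy as np

def data_dependent_attention(Q, K, V):
    """Illustrative attention variant with a data-dependent mask:
    keep only scores at or above each row's mean score."""
    S = Q @ K.T / np.sqrt(Q.shape[-1])
    # The mask depends on the data (score values), not just on (q_idx, kv_idx).
    keep = S >= S.mean(axis=-1, keepdims=True)
    S = np.where(keep, S, -np.inf)
    # Numerically stable softmax over the surviving scores.
    P = np.exp(S - S.max(axis=-1, keepdims=True))
    P /= P.sum(axis=-1, keepdims=True)
    return P @ V
```

In a PyTorch setting this would be written with tensors and wrapped in `torch.compile`; the point is that the masking logic is plain array code rather than a template parameter.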
Problem

Research questions and friction points this paper is trying to address.

Attention variants typically require specialized kernels or hand-tuned implementations to run efficiently
Static templates (e.g., FlexAttention) cover only a subset of variants and cannot express data-dependent attention patterns
Exploring new attention models without sacrificing performance remains slow and costly for developers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compiler-native framework generates fused attention kernels automatically
Leverages PyTorch compilation to tile attention computations transparently
Supports data-dependent attention patterns beyond static template limitations
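The fusion and tiling the bullets describe can be sketched independently of any compiler: a FlashAttention-style kernel never materializes the full score matrix, instead streaming over key/value tiles with an online softmax. Below is a minimal NumPy illustration of that tiling scheme, checked against a naive reference; the tile size and names are ours, not the paper's, and real generated kernels would operate on GPU tiles rather than NumPy slices.

```python
import numpy as np

def naive_attention(Q, K, V):
    """Reference implementation: materializes the full score matrix."""
    S = Q @ K.T / np.sqrt(Q.shape[-1])
    P = np.exp(S - S.max(axis=-1, keepdims=True))
    P /= P.sum(axis=-1, keepdims=True)
    return P @ V

def tiled_attention(Q, K, V, tile=4):
    """FlashAttention-style tiling: stream over K/V tiles with an
    online softmax, never holding the full score matrix."""
    d = Q.shape[-1]
    n = K.shape[0]
    O = np.zeros_like(Q)                 # running (unnormalized) output
    m = np.full(Q.shape[0], -np.inf)     # running row-wise max of scores
    l = np.zeros(Q.shape[0])             # running softmax denominator
    for j in range(0, n, tile):
        Kj, Vj = K[j:j + tile], V[j:j + tile]
        S = Q @ Kj.T / np.sqrt(d)                  # scores for this tile only
        m_new = np.maximum(m, S.max(axis=-1))
        alpha = np.exp(m - m_new)                  # rescale old accumulators
        P = np.exp(S - m_new[:, None])
        l = l * alpha + P.sum(axis=-1)
        O = O * alpha[:, None] + P @ Vj
        m = m_new
    return O / l[:, None]
```

The two functions agree to floating-point tolerance; the tiled form is what kernel fusion buys, since scores, softmax, and the value product for each tile stay in fast on-chip memory.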
Authors

Bozhi You
Department of Computer Science, University of Texas at Austin, Austin, USA
Irene Wang
PhD Student, Georgia Institute of Technology
Z. Mustafaoglu
Department of Computer Science, University of Texas at Austin, Austin, USA
Abhinav Jangda
Microsoft Research
Angélica Moreira
Microsoft Research, Redmond, USA
Roshan Dathathri
Senior Researcher, Microsoft Research
Divya Mahajan
Georgia Institute of Technology
K. Pingali
Department of Computer Science, University of Texas at Austin, Austin, USA