TritonRL: Training LLMs to Think and Code Triton Without Cheating

📅 2025-10-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address critical challenges in automated Triton kernel generation—including data scarcity, severe reward hacking, and inadequate evaluation metrics—this paper proposes the first end-to-end verifiable training framework. Methodologically, it integrates supervised fine-tuning with reinforcement learning, introducing a Triton-specific hierarchical reward mechanism and a fine-grained, verifiable feedback system to jointly optimize reasoning and code generation; knowledge distillation and a curated dataset further enhance generalization. Evaluated on KernelBench, our approach outperforms all existing models: it achieves 100% functional correctness for generated kernels, delivers a 23% average speedup over baseline implementations, and demonstrates strong robustness and deployability. To our knowledge, this is the first framework enabling fully automated, high-performance synthesis of industrial-grade Triton kernels.

Technology Category

Application Category

📝 Abstract
With the rapid evolution of large language models (LLMs), the demand for automated, high-performance system kernels has emerged as a key enabler for accelerating development and deployment. We introduce TritonRL, a domain-specialized LLM for Triton kernel generation, trained with a novel training framework that enables robust and automated kernel synthesis. Unlike general-purpose programming languages, Triton kernel generation faces unique challenges due to data scarcity and incomplete evaluation criteria, vulnerable to reward hacking. Our approach addresses these challenges end-to-end by distilling Triton-specific knowledge through supervised fine-tuning on curated datasets, and further improving code quality via reinforcement learning (RL) with robust, verifiable rewards and hierarchical reward assignment. Our RL framework robustly detects reward hacking and guides both reasoning traces and code tokens through fine-grained verification and hierarchical reward decomposition, enabling the model to generate high-quality Triton kernels that can truly replace existing modules. With robust and fine-grained evaluation, our experiments on KernelBench demonstrate that TritonRL achieves state-of-the-art correctness and speedup, surpassing all other Triton-specific models and underscoring the effectiveness of our RL-based training paradigm.
Problem

Research questions and friction points this paper is trying to address.

Automating high-performance Triton kernel generation for LLMs
Addressing data scarcity and reward hacking in kernel synthesis
Improving code quality through robust reinforcement learning framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Supervised fine-tuning with curated Triton datasets
Reinforcement learning with verifiable hierarchical rewards
Robust reward hacking detection through fine-grained verification
🔎 Similar Papers
No similar papers found.