DRTriton: Large-Scale Synthetic Data Reinforcement Learning for Triton Kernel Generation

📅 2026-03-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance limitations of large language models in translating PyTorch code to efficient Triton/CUDA kernels by introducing DRTriton, a framework that leverages large-scale synthetic data and reinforcement learning to automatically generate high-performance Triton kernels, which are then compiled to CUDA at runtime. Key innovations include a CSP-DAG–based synthetic data generation algorithm enabling full operator-space coverage with controllable difficulty sampling, a decoupled-reward curriculum reinforcement learning strategy, and a test-time search mechanism to further accelerate inference. Experimental results demonstrate that DRTriton-7B accelerates 92% of kernels on KernelBench Level 2, substantially outperforming GPT-5.2 (23%) and Claude-Sonnet-4.5 (19%), while exhibiting expert-level generalization in real-world complex scenarios.

📝 Abstract
Developing efficient CUDA kernels is a fundamental yet challenging task in the generative AI industry. Recent research leverages Large Language Models (LLMs) to automatically convert PyTorch reference implementations into CUDA kernels, significantly reducing engineering effort. However, state-of-the-art LLMs, such as GPT-5.2 and Claude-Sonnet-4.5, still struggle with this task. To address this challenge, we propose DRTriton, a scalable learning framework for training LLMs to convert PyTorch code into highly optimized Triton kernels, which are then compiled to CUDA kernels at runtime. DRTriton consists of three key components: (i) a synthetic data generation algorithm, CSP-DAG, that guarantees full coverage and unbiased uniform sampling over the operator space with controlled difficulty; (ii) a curriculum reinforcement learning strategy with a decoupled reward that efficiently optimizes conversion success rate and inference speed simultaneously; and (iii) a test-time search algorithm that further improves the inference speed of the generated Triton kernels. Notably, despite being trained exclusively on synthetic data, DRTriton generalizes effectively to real-world CUDA kernels that are challenging even for human experts. Experimental results show that DRTriton-7B achieves speedups on 92% of KernelBench Level 2 kernels, compared to 23% for GPT-5.2 and 19% for Claude-Sonnet-4.5.
Problem

Research questions and friction points this paper is trying to address.

Triton kernel generation
CUDA optimization
PyTorch to CUDA conversion
large language models
synthetic data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic Data Generation
Curriculum Reinforcement Learning
Triton Kernel Optimization
Decoupled Reward
Test-Time Search