AscendCraft: Automatic Ascend NPU Kernel Generation via DSL-Guided Transcompilation

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges in efficient deep learning operator development for Ascend NPUs, which are hindered by their domain-specific programming model, scarce documentation, and the low correctness of directly using large language models (LLMs) to generate AscendC code. To overcome these limitations, we propose a DSL-guided, structured transcompilation approach: first modeling NPU execution semantics in a lightweight domain-specific language (DSL) to generate kernels, then translating them into AscendC through a constraint-driven LLM lowering pipeline. Our method achieves the first high-correctness automatic kernel generation for Ascend NPUs, attaining a 98.1% compilation success rate and 90.4% functional correctness on MultiKernelBench. Notably, 46.2% of the generated kernels match or exceed the performance of PyTorch eager mode, and our approach successfully supports the novel mHC architecture.

📝 Abstract
The performance of deep learning models critically depends on efficient kernel implementations, yet developing high-performance kernels for specialized accelerators remains time-consuming and expertise-intensive. While recent work demonstrates that large language models (LLMs) can generate correct and performant GPU kernels, kernel generation for neural processing units (NPUs) remains largely underexplored due to domain-specific programming models, limited public examples, and sparse documentation. Consequently, directly generating AscendC kernels with LLMs yields extremely low correctness, highlighting a substantial gap between GPU and NPU kernel generation. We present AscendCraft, a DSL-guided approach for automatic AscendC kernel generation. AscendCraft introduces a lightweight DSL that abstracts non-essential complexity while explicitly modeling Ascend-specific execution semantics. Kernels are first generated in the DSL using category-specific expert examples and then transcompiled into AscendC through structured, constraint-driven LLM lowering passes. Evaluated on MultiKernelBench across seven operator categories, AscendCraft achieves 98.1% compilation success and 90.4% functional correctness. Moreover, 46.2% of generated kernels match or exceed PyTorch eager execution performance, demonstrating that DSL-guided transcompilation can enable LLMs to generate both correct and competitive NPU kernels. Beyond benchmarks, AscendCraft further demonstrates its generality by successfully generating two correct kernels for the newly proposed mHC architecture, achieving performance that substantially surpasses PyTorch eager execution.
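To make the two-stage pipeline in the abstract concrete, here is a minimal Python sketch of "generate in a DSL, then lower through constraint-checked passes." All names (`generate_dsl_kernel`, `LoweringPass`, the toy passes) are illustrative assumptions, not the paper's actual DSL or API; in AscendCraft the lowering steps are LLM-driven rather than hand-written string transforms.

```python
# Hypothetical sketch of a DSL-guided transcompilation pipeline.
# Stage 1 drafts a kernel in a lightweight DSL from a category-specific
# expert example; stage 2 lowers it toward AscendC via a sequence of
# passes, each of which must satisfy an explicit constraint.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Kernel:
    language: str  # "dsl" or "ascendc"
    source: str

def generate_dsl_kernel(op_category: str, expert_example: str) -> Kernel:
    # Stub: real systems would prompt an LLM with the expert example.
    return Kernel("dsl", f"# based on {op_category} example\n{expert_example}")

@dataclass
class LoweringPass:
    name: str
    transform: Callable[[str], str]
    constraint: Callable[[str], bool]  # postcondition on the pass output

def lower_to_ascendc(kernel: Kernel, passes: List[LoweringPass]) -> Kernel:
    src = kernel.source
    for p in passes:
        src = p.transform(src)
        if not p.constraint(src):
            # A failing constraint would trigger re-generation/repair.
            raise ValueError(f"constraint violated after pass {p.name!r}")
    return Kernel("ascendc", src)

# Toy passes standing in for the constraint-driven LLM lowering steps.
passes = [
    LoweringPass("tile-memory",
                 lambda s: s + "\n// tiling inserted",
                 lambda s: "tiling" in s),
    LoweringPass("emit-ascendc",
                 lambda s: s.replace("# ", "// "),
                 lambda s: "//" in s),
]

dsl = generate_dsl_kernel("elementwise", "add(a, b)")
ascendc = lower_to_ascendc(dsl, passes)
```

The key design point mirrored from the abstract: each lowering pass carries an explicit constraint, so a failure is localized to one pass instead of silently producing incorrect AscendC.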
Problem

Research questions and friction points this paper is trying to address.

NPU kernel generation
AscendC
large language models
domain-specific programming
automatic code generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

DSL-guided transcompilation
Ascend NPU
kernel generation
large language models
AscendC
Zhongzhen Wen
State Key Lab for Novel Software Technology, Nanjing University
Shudi Shao
Software Engineering Application Technology Laboratory, Huawei
Zhong Li
State Key Lab for Novel Software Technology, Nanjing University
Yu Ge
Chalmers University of Technology
Tongtong Xu
Software Engineering Application Technology Laboratory, Huawei
Yuanyi Lin
Software Engineering Application Technology Laboratory, Huawei
Tian Zhang
Nanjing University
Model Driven Software Engineering