🤖 AI Summary
This work addresses the difficulty of developing efficient deep learning operators for Ascend NPUs, where a domain-specific programming model and scarce documentation leave large language models (LLMs) with extremely low correctness when asked to generate AscendC code directly. To overcome these limitations, we propose a DSL-guided, structured transcompilation approach: NPU execution semantics are first modeled in a lightweight domain-specific language (DSL) in which kernels are generated, and those kernels are then translated into AscendC through a constraint-driven LLM lowering pipeline. Our method achieves the first high-correctness automatic kernel generation for Ascend NPUs, attaining a 98.1% compilation success rate and 90.4% functional correctness on MultiKernelBench. Notably, 46.2% of the generated kernels match or exceed the performance of PyTorch eager mode, and the approach also produces correct kernels for the newly proposed mHC architecture.
📝 Abstract
The performance of deep learning models critically depends on efficient kernel implementations, yet developing high-performance kernels for specialized accelerators remains time-consuming and expertise-intensive. While recent work demonstrates that large language models (LLMs) can generate correct and performant GPU kernels, kernel generation for neural processing units (NPUs) remains largely underexplored due to domain-specific programming models, limited public examples, and sparse documentation. Consequently, directly generating AscendC kernels with LLMs yields extremely low correctness, highlighting a substantial gap between GPU and NPU kernel generation. We present AscendCraft, a DSL-guided approach for automatic AscendC kernel generation. AscendCraft introduces a lightweight DSL that abstracts non-essential complexity while explicitly modeling Ascend-specific execution semantics. Kernels are first generated in the DSL using category-specific expert examples and then transcompiled into AscendC through structured, constraint-driven LLM lowering passes. Evaluated on MultiKernelBench across seven operator categories, AscendCraft achieves 98.1% compilation success and 90.4% functional correctness. Moreover, 46.2% of generated kernels match or exceed PyTorch eager execution performance, demonstrating that DSL-guided transcompilation can enable LLMs to generate both correct and competitive NPU kernels. Beyond benchmarks, AscendCraft further demonstrates its generality by successfully generating two correct kernels for the newly proposed mHC architecture, achieving performance that substantially surpasses PyTorch eager execution.
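To make the described pipeline concrete (DSL kernel → structured, constraint-driven LLM lowering passes → AscendC, gated by compile and functional checks), the sketch below illustrates the general shape such a lowering loop could take. It is a hypothetical Python illustration, not AscendCraft's actual code or interface: every name here (`LoweringPass`, `query_llm`, `compiles`, `matches_reference`, the retry budget) is an assumption introduced for exposition.

```python
# Hypothetical sketch of a constraint-driven, multi-pass LLM lowering loop.
# None of these names come from the paper; they only illustrate the idea of
# structured passes with per-pass constraints and compile/correctness gates.
from dataclasses import dataclass
from typing import List


@dataclass
class LoweringPass:
    name: str         # e.g. "tiling" or "AscendC code emission" (illustrative)
    constraints: str  # pass-specific rules injected into the LLM prompt


def query_llm(prompt: str) -> str:
    """Stand-in for a real model client call."""
    return "/* candidate AscendC source */"


def compiles(src: str) -> bool:
    """Stand-in: would invoke the AscendC toolchain and report success."""
    return True


def matches_reference(src: str) -> bool:
    """Stand-in: would compare kernel output against a PyTorch reference."""
    return True


def lower(dsl_kernel: str, passes: List[LoweringPass], retries: int = 3) -> str:
    """Lower a DSL kernel to AscendC one structured pass at a time,
    retrying a pass when its candidate fails to compile."""
    src = dsl_kernel
    for p in passes:
        for _ in range(retries):
            prompt = f"Pass: {p.name}\nConstraints:\n{p.constraints}\nInput:\n{src}\n"
            candidate = query_llm(prompt)
            if compiles(candidate):
                src = candidate
                break
        else:
            raise RuntimeError(f"pass '{p.name}' failed after {retries} attempts")
    if not matches_reference(src):
        raise RuntimeError("lowered kernel failed the functional check")
    return src
```

Splitting lowering into explicit passes keeps each LLM query narrowly constrained and lets compiler feedback be applied per pass rather than to a whole regenerated kernel, which is consistent with the high compilation success rate the abstract reports.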