🤖 AI Summary
This work addresses the inefficiency and lack of interpretability in existing large language model-based GPU kernel optimization methods, which rely on implicit heuristics. To overcome these limitations, the authors propose KernelSkill, a knowledge-driven, trajectory-aware multi-agent optimization framework featuring a novel dual-level memory architecture. The long-term memory stores reusable expert optimization skills, while the short-term memory prevents redundant search efforts, thereby transforming implicit heuristics into explicit, structured optimization knowledge. Experimental results demonstrate that KernelSkill achieves 100% optimization success across KernelBench Levels 1-3, delivering average speedups of 5.44x, 2.82x, and 1.92x over Torch Eager, significantly outperforming current baseline approaches.
📄 Abstract
Improving GPU kernel efficiency is crucial for advancing AI systems. Recent work has explored leveraging large language models (LLMs) for GPU kernel generation and optimization. However, existing LLM-based kernel optimization pipelines typically rely on opaque, implicitly learned heuristics within the LLMs to determine optimization strategies. This leads to inefficient trial-and-error and weakly interpretable optimizations. Our key insight is to replace implicit heuristics with expert optimization skills that are knowledge-driven and aware of task trajectories. Specifically, we present KernelSkill, a multi-agent framework with a dual-level memory architecture. KernelSkill operates by coordinating agents with long-term memory of reusable expert skills and short-term memory to prevent repetitive backtracking. On KernelBench Levels 1-3, KernelSkill achieves a 100% success rate and average speedups of 5.44x, 2.82x, and 1.92x over Torch Eager on Levels 1, 2, and 3, respectively, outperforming prior baselines. Code is available at https://github.com/0satan0/KernelMem/.
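To make the dual-level memory design concrete, here is a minimal sketch of how a long-term skill store and a short-term attempt log might interact. All names (`DualLevelMemory`, `add_skill`, `should_try`) are illustrative assumptions, not identifiers from the KernelSkill codebase:

```python
class DualLevelMemory:
    """Sketch of a dual-level memory, assuming the paper's high-level design:
    long-term memory holds reusable expert optimization skills;
    short-term memory records attempts to avoid repetitive backtracking."""

    def __init__(self):
        self.long_term = {}      # skill name -> reusable optimization recipe
        self.short_term = set()  # (kernel_id, strategy) pairs already tried

    def add_skill(self, name, recipe):
        # Long-term memory: persist an expert skill for reuse across tasks.
        self.long_term[name] = recipe

    def should_try(self, kernel_id, strategy):
        # Short-term memory: skip strategies already attempted on this kernel,
        # preventing redundant search within one optimization trajectory.
        key = (kernel_id, strategy)
        if key in self.short_term:
            return False
        self.short_term.add(key)
        return True


mem = DualLevelMemory()
mem.add_skill("tiling", "use shared-memory tiles sized to the SM's cache")
print(mem.should_try("matmul", "tiling"))  # True: first attempt proceeds
print(mem.should_try("matmul", "tiling"))  # False: redundant attempt is blocked
```

In the actual framework, the agents coordinating an optimization task would consult both levels before proposing a transformation; this toy version only captures the dedup-and-reuse idea.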