AI Summary
To address the growing mismatch between the GPU's bulk-synchronous execution model and the heterogeneous, increasingly large-scale computational graphs of modern deep learning models, this paper proposes a lightweight architectural enhancement: a dataflow-driven, operator-level concurrent execution mechanism layered atop existing GPUs. The approach integrates microarchitectural extension primitives, a custom compiler based on PyTorch Dynamo, and a dataflow-graph scheduling runtime. Crucially, it requires no clean-slate hardware redesign, yet enables dynamic SM resource scheduling, optimized on-chip data movement, and exploitation of implicit parallelism dimensions. Evaluated across five representative deep learning workloads, the method achieves 1.3x-2.3x inference speedup (reducing off-chip memory traffic by 41%-98%) and 1.1x-2.4x training speedup (reducing off-chip memory traffic by 16%-42%). To our knowledge, this is the first work to efficiently unlock underutilized compute resources and overcome the limitations of vertical kernel fusion on commodity GPUs.
Abstract
State-of-the-art DL models are growing in size and complexity, and many modern models are also increasingly heterogeneous in behavior. GPUs remain the dominant platform for DL applications, but they rely on a bulk-synchronous execution model that has many drawbacks and is ill-suited to the graph structure of DL applications. Many industry and academic works attempt to overcome these limitations through vertical fusion, but this approach still fails to realize three untapped opportunities: (1) many resources on the GPU sit idle while only one operator executes, due to temporal multiplexing of the SMs; (2) more intelligent on-chip data movement lowers energy, which translates to higher performance in a power-provisioned environment; and (3) hidden or reduction dimensions go unexploited as a source of parallelism that could ease pressure on batch size. This paper explores relatively uncharted territory, answering the following key question: can modest adjustments to the current GPU architecture enable efficient dataflow execution, thereby circumventing the constraints of vertical fusion without necessitating a clean-slate architecture design? We develop Kitsune -- a set of primitives that enable dataflow execution on GPUs and an end-to-end compiler based on PyTorch Dynamo. Across 5 challenge applications, Kitsune provides 1.3$\times$-2.3$\times$ and 1.1$\times$-2.4$\times$ performance improvement as well as 41%-98% and 16%-42% off-chip traffic reduction for inference and training, respectively.
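The core contrast with bulk-synchronous execution can be illustrated with a toy, CPU-side sketch (a hypothetical illustration only, not Kitsune's actual runtime, which schedules operators across SMs on the GPU): a dataflow executor launches every operator whose inputs are ready, so independent branches of the graph overlap instead of running one kernel at a time.

```python
# Toy dataflow scheduler (illustrative only, not Kitsune's runtime):
# operators fire as soon as their inputs are ready, so independent
# branches of the graph run concurrently, unlike a bulk-synchronous
# executor that serializes one operator at a time.
from collections import deque
from concurrent.futures import ThreadPoolExecutor


def dataflow_execute(graph, ops):
    """graph: op -> list of ops it depends on; ops: op -> callable."""
    indeg = {op: len(deps) for op, deps in graph.items()}
    dependents = {op: [] for op in graph}
    for op, deps in graph.items():
        for dep in deps:
            dependents[dep].append(op)

    order = []  # wavefronts of ops, in the order they were launched
    ready = deque(op for op, n in indeg.items() if n == 0)
    with ThreadPoolExecutor() as pool:
        while ready:
            batch = list(ready)  # all currently-ready ops run concurrently
            ready.clear()
            list(pool.map(lambda op: ops[op](), batch))
            order.extend(batch)
            for op in batch:  # retire ops, releasing their dependents
                for nxt in dependents[op]:
                    indeg[nxt] -= 1
                    if indeg[nxt] == 0:
                        ready.append(nxt)
    return order


# A diamond-shaped graph: b and c both depend only on a, so a dataflow
# scheduler can run them in the same wavefront.
graph = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
ops = {op: (lambda: None) for op in graph}
print(dataflow_execute(graph, ops))
```

In the diamond example, `b` and `c` execute in the same wavefront; a bulk-synchronous model would instead run the four operators strictly one after another, leaving resources idle whenever a single operator cannot fill the machine.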