🤖 AI Summary
Sparse tensor programs are critical in deep learning and graph analytics, yet optimizing them on emerging hardware accelerators faces two key challenges: (1) performance is highly sensitive to input sparsity patterns, and (2) early-stage hardware relies on high-overhead cycle-accurate simulators, making conventional machine learning cost models impractical because of their prohibitive data requirements. This paper introduces COGNATE, the first cross-platform transfer learning framework for sparse tensor program optimization. It pretrains cost models on low-cost CPU-collected samples, exploits the homogeneity of input features across platforms while mitigating their heterogeneity, uses sparse computation graph representations, and applies few-shot fine-tuning on the target accelerator, reaching comparable accuracy with only 5% of the target-hardware training data. By addressing the data-efficiency and hardware-heterogeneity bottlenecks, the framework delivers average speedups of 1.47× on SpMM and 1.39× on SDDMM kernels, with peak improvements of 5.46× and 4.22× respectively, substantially outperforming state-of-the-art approaches.
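The paper itself is summarized here without code; as a rough illustration of the pretrain-then-fine-tune workflow described above, the sketch below trains a small cost model on plentiful (here synthetic) CPU latency samples and then adapts it with a much smaller slice of (synthetic) target-hardware samples. The model architecture, feature dimensions, and hyperparameters are placeholders for illustration, not COGNATE's actual design.

```python
# Minimal sketch of the pretrain / few-shot fine-tune idea (not COGNATE's
# actual model): a shared input-feature space lets a cost model pretrained on
# cheap CPU measurements be adapted with very few accelerator samples.
import torch
import torch.nn as nn

def make_cost_model(num_features: int) -> nn.Module:
    # Tiny MLP regressor mapping program/sparsity features to predicted latency.
    return nn.Sequential(
        nn.Linear(num_features, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 1),
    )

def train(model: nn.Module, feats, latency, epochs: int, lr: float) -> nn.Module:
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(feats).squeeze(-1), latency)
        loss.backward()
        opt.step()
    return model

# Synthetic stand-ins: abundant CPU samples, scarce accelerator samples (~5%).
num_features = 16
cpu_feats, cpu_lat = torch.randn(10_000, num_features), torch.rand(10_000)
acc_feats, acc_lat = torch.randn(500, num_features), torch.rand(500)

model = make_cost_model(num_features)
train(model, cpu_feats, cpu_lat, epochs=200, lr=1e-3)   # pretrain on CPU data
train(model, acc_feats, acc_lat, epochs=50, lr=1e-4)    # few-shot fine-tune on target hardware
```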
📝 Abstract
Sparse tensor programs are essential in deep learning and graph analytics, driving the need for optimized processing. To meet this demand, specialized hardware accelerators are being developed. Optimizing these programs for accelerators is challenging for two reasons: program performance is highly sensitive to variations in sparse inputs, and early-stage accelerators rely on expensive simulators. Therefore, ML-based cost models used for optimizing such programs on general-purpose hardware are often ineffective for early-stage accelerators, as they require large datasets for proper training. To this end, we introduce COGNATE, a novel framework that leverages inexpensive data samples from general-purpose hardware (e.g., CPUs) to train cost models, followed by few-shot fine-tuning on emerging hardware. COGNATE exploits the homogeneity of input features across hardware platforms while effectively mitigating heterogeneity, enabling cost model training with just 5% of the data samples needed by accelerator-specific models to achieve comparable performance. We conduct extensive experiments to demonstrate that COGNATE outperforms existing techniques, achieving average speedups of 1.47× (up to 5.46×) for SpMM and 1.39× (up to 4.22×) for SDDMM.
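For readers unfamiliar with the two kernels used in the evaluation, the snippet below gives reference semantics for SpMM (sparse matrix times dense matrix) and SDDMM (sampled dense-dense matrix multiplication). It uses SciPy sparse matrices purely for illustration; it is not part of COGNATE, and the shapes and density are arbitrary.

```python
# Reference semantics of the two evaluated kernels, written with SciPy sparse
# matrices for illustration (accelerators implement these kernels natively).
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
A = sp.random(256, 256, density=0.01, format="csr", random_state=0)  # sparse input
B = rng.standard_normal((256, 64))
C = rng.standard_normal((256, 64))

# SpMM: sparse matrix times dense matrix -> dense output of shape (256, 64).
spmm_out = A @ B

# SDDMM: dense-dense product B @ C^T sampled (elementwise-multiplied) at the
# nonzero positions of A -> sparse output with the same sparsity pattern as A.
sddmm_out = A.multiply(B @ C.T)
```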