🤖 AI Summary
In FPGA high-level synthesis (HLS), developers often lack hardware expertise, which makes it difficult to insert optimization pragmas effectively. The problem is especially acute across kernels, where domain shift severely degrades the generalizability of performance prediction models. To address this, we propose a hierarchical Mixture-of-Experts (MoE) framework built on graph neural networks. At the lower level, it concurrently models programs at three granularities (program nodes, basic blocks, and computation graphs); at the higher level, it dynamically aggregates the multi-granularity representations. We further introduce a two-stage adaptive training strategy to ensure stable convergence. Experiments show that our method improves prediction accuracy by 12.7% on cross-kernel generalization tasks, significantly enhancing robustness and transferability. This work provides a scalable, highly generalizable solution for automated pragma optimization in HLS.
📝 Abstract
High-level synthesis (HLS) is a widely used tool for designing Field Programmable Gate Arrays (FPGAs). HLS enables FPGA design with software programming languages by compiling the source code into an FPGA circuit. The source code consists of a program (called a "kernel") and several pragmas that guide hardware synthesis, such as parallelization and pipelining. While writing the program is relatively easy for software developers, designing the pragmas heavily relies on hardware knowledge, posing a significant challenge for them. Recently, various machine learning models, such as graph neural networks (GNNs), have been proposed to automate pragma design via performance prediction. However, when the trained model is applied to new kernels, the significant domain shift often leads to unsatisfactory performance. We propose a more domain-generalizable model structure: a two-level hierarchical Mixture of Experts (MoE), which can be flexibly adapted to any GNN model. Different expert networks learn to handle different regions of the representation space and can exploit patterns shared between old and new kernels. At the low level, we apply MoE at three natural granularities of a program: node, basic block, and graph. The high-level MoE learns to aggregate the three granularities for the final decision. To stably train the hierarchical MoE, we further propose a two-stage training method. Extensive experiments verify the effectiveness of the hierarchical MoE.
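For intuition only, the sketch below shows one way such a two-level hierarchical MoE head could sit on top of GNN embeddings, written in PyTorch. It is not the paper's implementation: all module names, expert counts, dimensions, and the mean-pooling and soft-gating choices are assumptions.

```python
# Hypothetical sketch of a two-level hierarchical MoE head over GNN embeddings.
# Names, sizes, and pooling choices are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoE(nn.Module):
    """Soft mixture of experts: a gate produces weights over small expert MLPs."""

    def __init__(self, dim: int, num_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
             for _ in range(num_experts)]
        )
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.gate(x), dim=-1)                        # [..., E]
        expert_out = torch.stack([e(x) for e in self.experts], dim=-2)   # [..., E, dim]
        return (weights.unsqueeze(-1) * expert_out).sum(dim=-2)          # [..., dim]


class HierarchicalMoEHead(nn.Module):
    """Low-level MoEs per granularity, plus a high-level gate that fuses them."""

    def __init__(self, dim: int):
        super().__init__()
        self.node_moe = MoE(dim)    # per program node
        self.block_moe = MoE(dim)   # per basic block
        self.graph_moe = MoE(dim)   # whole computation graph
        self.high_gate = nn.Linear(3 * dim, 3)
        self.predictor = nn.Linear(dim, 1)  # e.g. a performance/quality score

    def forward(self, node_emb, block_emb, graph_emb):
        # Pool the finer granularities so all three summaries share one space.
        node_repr = self.node_moe(node_emb).mean(dim=0)
        block_repr = self.block_moe(block_emb).mean(dim=0)
        graph_repr = self.graph_moe(graph_emb)
        stacked = torch.stack([node_repr, block_repr, graph_repr])       # [3, dim]
        alpha = F.softmax(self.high_gate(stacked.flatten()), dim=-1)     # [3]
        fused = (alpha.unsqueeze(-1) * stacked).sum(dim=0)               # [dim]
        return self.predictor(fused)


# Toy usage with embeddings that any GNN backbone might produce.
head = HierarchicalMoEHead(dim=64)
pred = head(torch.randn(120, 64), torch.randn(10, 64), torch.randn(64))
```

A dense softmax gate is used here for simplicity; a sparse top-k gate would be a drop-in replacement, and the two-stage training the abstract mentions could, for example, correspond to first warming up the experts and then tuning the gates, though the paper's exact procedure is not specified here.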