🤖 AI Summary
Optimizing pragma directives in high-level synthesis (HLS) is challenging: the design space is vast, and pragmas affect performance in highly nonlinear, interacting ways. To address this, we propose a hardware design comparison framework based on relative preference learning. Our method represents source code as a graph and integrates a graph neural network (GNN) with a node-difference attention module, jointly optimizing a pairwise preference-ranking loss and a pointwise performance-prediction loss. We further introduce a two-stage exploration strategy, pointwise pruning followed by pairwise validation, to navigate the search space efficiently. Compared to the state-of-the-art HARP, our approach achieves significant improvements across multiple ranking metrics and generates accelerators with superior timing closure, reduced area, and higher throughput. Notably, it is the first method to unify pragma-criticality identification with nonlinear interaction modeling in a single optimization framework.
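The hybrid objective described above combines a pairwise preference term with a pointwise regression term. A minimal sketch of that idea, assuming a logistic ranking loss, a squared-error regression term, and a mixing weight `alpha` (all hypothetical; the paper's exact formulation may differ):

```python
import math

def hybrid_loss(pred_i, pred_j, true_i, true_j, alpha=0.5):
    """Hypothetical sketch of a hybrid objective: pairwise preference
    (logistic ranking) plus pointwise regression (squared error).
    Lower true values are assumed better (e.g. latency)."""
    # Pairwise term: penalize predicted orderings that disagree with
    # the true ordering of the two designs.
    sign = 1.0 if true_i < true_j else -1.0
    pairwise = math.log(1.0 + math.exp(-sign * (pred_j - pred_i)))
    # Pointwise term: squared error on absolute performance values.
    pointwise = 0.5 * ((pred_i - true_i) ** 2 + (pred_j - true_j) ** 2)
    # `alpha` trades off ranking accuracy against regression accuracy.
    return alpha * pairwise + (1.0 - alpha) * pointwise
```

A correctly ordered pair with accurate predictions incurs a much smaller loss than a swapped-order pair, which is what lets the model capture relative preferences and absolute values at once.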
📝 Abstract
High-level synthesis (HLS) is an automated design process that transforms high-level code into optimized hardware designs, enabling rapid development of efficient hardware accelerators for applications such as image processing, machine learning, and signal processing. To achieve optimal performance, HLS tools rely on pragmas: directives inserted into the source code to guide the synthesis process. These pragmas can take various settings and values that significantly impact the resulting hardware design. State-of-the-art ML-based HLS methods, such as HARP, first train a deep learning model, typically a graph neural network (GNN) applied to graph-based representations of the source code and its pragmas. They then perform design space exploration (DSE) over the pragma design space, rank candidate designs using the trained model, and return the top-ranked candidates as the final designs. However, traditional DSE methods face challenges due to the highly nonlinear relationship between pragma settings and performance metrics, along with complex interactions between pragmas that affect performance in non-obvious ways. To address these challenges, we propose compareXplore, a novel approach that learns to compare hardware designs for effective HLS optimization. compareXplore introduces a hybrid loss function that combines pairwise preference learning with pointwise performance prediction, enabling the model to capture both relative preferences and absolute performance values. Moreover, we introduce a novel Node Difference Attention module that focuses on the most informative differences between designs, enhancing the model's ability to identify critical pragmas that impact performance. compareXplore adopts a two-stage DSE approach, in which a pointwise prediction model performs initial design pruning, followed by a pairwise comparison stage for precise performance verification.
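The Node Difference Attention idea can be illustrated in miniature: score each per-node feature difference between two designs, softmax the scores into attention weights, and pool the differences accordingly, so nodes whose pragmas differ most informatively dominate the comparison. A pure-Python sketch under assumed simplifications (a single scoring vector `score_w` stands in for the learned scorer; the actual module operates on GNN node embeddings):

```python
import math

def node_diff_attention(feats_a, feats_b, score_w):
    """Hypothetical sketch of attention over node-feature differences.
    feats_a, feats_b: per-node feature vectors of two aligned designs.
    score_w: assumed scoring vector (stands in for a learned scorer)."""
    # Per-node differences between the two designs.
    diffs = [[xa - xb for xa, xb in zip(na, nb)]
             for na, nb in zip(feats_a, feats_b)]
    # Score each difference, then normalize with a stable softmax.
    scores = [sum(w * d for w, d in zip(score_w, diff)) for diff in diffs]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    attn = [e / z for e in exps]
    # Attention-weighted pooling of the differences.
    dim = len(score_w)
    pooled = [sum(attn[i] * diffs[i][k] for i in range(len(diffs)))
              for k in range(dim)]
    return attn, pooled
```

Nodes with no difference between the designs receive low attention, which is the mechanism by which the model can focus on the pragmas that actually distinguish two candidates.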
Experimental results demonstrate that compareXplore achieves significant improvements in ranking metrics and generates high-quality HLS results for the selected designs, outperforming the existing state-of-the-art method.

CCS CONCEPTS
• Hardware → High-level and register-transfer level synthesis; • Computing methodologies → Neural networks.
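The two-stage DSE described in the abstract can be sketched as follows: a cheap pointwise model prunes the candidate pool, then pairwise comparisons select among the survivors. A hypothetical sketch (the selection-by-pairwise-wins rule and `top_k` cutoff are assumptions for illustration, not the paper's exact procedure):

```python
def two_stage_dse(candidates, pointwise_score, pairwise_better, top_k=4):
    """Hypothetical two-stage exploration sketch.
    pointwise_score(c): cheap scalar prediction (lower = better).
    pairwise_better(a, b): True if the pairwise model prefers a over b."""
    # Stage 1: prune to the top_k candidates by pointwise prediction.
    survivors = sorted(candidates, key=pointwise_score)[:top_k]
    # Stage 2: verify with pairwise comparisons; pick the candidate
    # that wins the most head-to-head matchups.
    wins = {c: sum(pairwise_better(c, other)
                   for other in survivors if other != c)
            for c in survivors}
    return max(survivors, key=lambda c: wins[c])
```

The pointwise stage keeps the number of expensive pairwise queries quadratic only in `top_k` rather than in the full design-space size, which is why the split pays off on large pragma spaces.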