ForgeBench: A Machine Learning Benchmark Suite and Auto-Generation Framework for Next-Generation HLS Tools

📅 2025-04-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current high-level synthesis (HLS) tools face two critical bottlenecks: (1) a lack of comprehensive benchmarks covering modern machine learning (ML) applications, hindering systematic hardware design evaluation; and (2) a design paradigm confined to monolithic accelerators, lacking architectural abstraction and thus impeding modular reuse and flexible dataflow customization. To address these, the authors propose ForgeBench, an architecture-oriented HLS benchmark and automated generation framework tailored for ML. The contributions include (i) a large-scale benchmark suite comprising 6,000+ parameterized ML HLS designs, including a subset with resource sharing manually implemented, and (ii) a Python-based domain-specific language (DSL) that automatically maps operator graphs to parameterized hardware templates, supporting multi-granularity resource constraint modeling and integration with mainstream HLS toolchains. This work advances HLS toward modularity, reusability, and dataflow programmability.
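To make the operator-graph-to-template idea concrete, here is a minimal, purely illustrative sketch of what such a mapping could look like. None of these names, templates, or data structures come from ForgeBench's actual DSL; they are assumptions used only to show the concept of instantiating parameterized HLS-style stubs from a graph of ML operators.

```python
# Hypothetical sketch: map an ML operator graph onto parameterized hardware
# templates. All names (OP_GRAPH, TEMPLATES, emit_design) are illustrative,
# NOT part of ForgeBench's real API.

# A tiny operator graph: each node is (instance name, op type, parameters).
OP_GRAPH = [
    ("conv1", "conv2d", {"in_ch": 3, "out_ch": 16, "k": 3}),
    ("relu1", "relu",   {"width": 16}),
    ("fc1",   "dense",  {"in_dim": 256, "out_dim": 10}),
]

# Parameterized "hardware templates": HLS-style C declarations keyed by op type.
TEMPLATES = {
    "conv2d": "void {name}(float in[{in_ch}][H][W], float out[{out_ch}][H][W]);  // {k}x{k} kernel",
    "relu":   "void {name}(float io[{width}][H][W]);",
    "dense":  "void {name}(float in[{in_dim}], float out[{out_dim}]);",
}

def emit_design(graph):
    """Instantiate each operator node's template, producing HLS stub declarations."""
    lines = []
    for name, op, params in graph:
        lines.append(TEMPLATES[op].format(name=name, **params))
    return "\n".join(lines)

print(emit_design(OP_GRAPH))
```

A real framework would, of course, also generate function bodies, pragmas, and interconnect; the point here is only the separation between the operator graph (the "what") and the parameterized templates (the "how").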

📝 Abstract
Although High-Level Synthesis (HLS) has attracted considerable interest in hardware design, it has not yet become mainstream due to two primary challenges. First, current HLS hardware design benchmarks are outdated as they do not cover modern machine learning (ML) applications, preventing the rigorous development of HLS tools on ML-focused hardware design. Second, existing HLS tools are outdated because they predominantly target individual accelerator designs and lack an architecture-oriented perspective to support common hardware module extraction and reuse, limiting their adaptability and broader applicability. Motivated by these two limitations, we propose ForgeBench, an ML-focused benchmark suite with a hardware design auto-generation framework for next-generation HLS tools. In addition to the auto-generation framework, we provide two ready-to-use benchmark suites. The first contains over 6,000 representative ML HLS designs. We envision future HLS tools being architecture-oriented, capable of automatically identifying common computational modules across designs, and supporting flexible dataflow and control. Accordingly, the second benchmark suite includes ML HLS designs with possible resource sharing manually implemented to highlight the necessity of architecture-oriented design, ensuring it is future-HLS ready. ForgeBench is open-sourced at https://github.com/hchen799/ForgeBench.
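The "resource sharing" mentioned in the abstract can be illustrated with a small sketch: instead of each operator instantiating its own compute unit, several operators time-multiplex one shared unit. The class and function names below are hypothetical and only model the scheduling idea in software; they are not taken from the benchmark suite.

```python
# Hypothetical illustration of resource sharing: two matrix-vector "layers"
# time-share one multiply-accumulate (MAC) unit instead of each getting its own.
# All names here are illustrative, not from ForgeBench.

class SharedMAC:
    """A single MAC resource; `calls` counts how often it is time-multiplexed."""
    def __init__(self):
        self.calls = 0

    def mac(self, acc, a, b):
        self.calls += 1
        return acc + a * b

def matvec(mac_unit, matrix, vector):
    """Matrix-vector product with every multiply-add routed through the shared MAC."""
    out = []
    for row in matrix:
        acc = 0.0
        for a, b in zip(row, vector):
            acc = mac_unit.mac(acc, a, b)
        out.append(acc)
    return out

shared = SharedMAC()
y1 = matvec(shared, [[1, 2], [3, 4]], [1, 1])   # first "layer"
y2 = matvec(shared, [[1, 0], [0, 1]], y1)       # second "layer" reuses the same MAC
print(y1, y2, shared.calls)  # → [3.0, 7.0] [3.0, 7.0] 8
```

In hardware, this trades area (one MAC instead of two) for latency (the operators must be scheduled sequentially onto the shared unit), which is exactly the kind of architectural decision that an accelerator-per-design HLS flow cannot express.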
Problem

Research questions and friction points this paper is trying to address.

Outdated HLS benchmarks lack modern ML applications
Existing HLS tools target individual accelerator designs and lack an architecture-oriented perspective
Need for an auto-generation framework for next-generation HLS tools
Innovation

Methods, ideas, or system contributions that make the work stand out.

Auto-generation framework for ML hardware designs
6,000+ ML HLS benchmark designs
Resource-sharing benchmark subset supporting architecture-oriented, future-ready HLS