🤖 AI Summary
This work proposes a parameterized convolutional accelerator architecture based on high-level synthesis (HLS) to address the limitations of conventional CNN accelerators, which often prioritize peak performance at the expense of critical embedded constraints such as latency, power consumption, area, and cost. By leveraging a hardware-software co-design approach, the proposed architecture enables efficient multi-objective optimization across these dimensions, overcoming the rigidity of fixed architectures. Experimental results demonstrate that, compared to non-parameterized designs, the proposed solution not only meets stringent embedded deployment requirements but also offers superior scalability and energy efficiency. Furthermore, the framework exhibits broad applicability and can be readily extended to other deep learning acceleration scenarios.
📝 Abstract
Convolutional neural network (CNN) accelerators implemented on Field-Programmable Gate Arrays (FPGAs) are typically designed with a primary focus on maximizing performance, often measured in giga-operations per second (GOPS). However, real-world embedded deep learning (DL) applications impose multiple constraints related to latency, power consumption, area, and cost. This work presents a hardware-software (HW/SW) co-design methodology in which a CNN accelerator is described using high-level synthesis (HLS) tools that ease the parameterization of the design, facilitating more effective optimization across multiple design constraints. Our experimental results demonstrate that the proposed design methodology outperforms non-parameterized design approaches, and it can be easily extended to other types of DL applications.