A Parameterizable Convolution Accelerator for Embedded Deep Learning Applications

📅 2025-07-06
🏛️ IEEE Computer Society Annual Symposium on VLSI
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a parameterized convolutional accelerator architecture based on high-level synthesis (HLS) to address the limitations of conventional CNN accelerators, which often prioritize peak performance at the expense of critical embedded constraints such as latency, power consumption, area, and cost. By leveraging a hardware-software co-design approach, the proposed architecture enables efficient multi-objective optimization across these dimensions, overcoming the rigidity of fixed architectures. Experimental results demonstrate that, compared to non-parameterized designs, the proposed solution not only meets stringent embedded deployment requirements but also offers superior scalability and energy efficiency. Furthermore, the framework exhibits broad applicability and can be readily extended to other deep learning acceleration scenarios.

📝 Abstract
Convolutional neural network (CNN) accelerators implemented on Field-Programmable Gate Arrays (FPGAs) are typically designed with a primary focus on maximizing performance, often measured in giga-operations per second (GOPS). However, real-world embedded deep learning (DL) applications impose multiple constraints related to latency, power consumption, area, and cost. This work presents a hardware-software (HW/SW) co-design methodology in which a CNN accelerator is described using high-level synthesis (HLS) tools that ease the parameterization of the design, facilitating more effective optimization across multiple design constraints. Our experimental results demonstrate that the proposed design methodology outperforms non-parameterized design approaches, and it can be easily extended to other types of DL applications.
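The paper itself does not include code; as a minimal sketch of the core idea, a convolution kernel can expose its design knobs (feature-map size, kernel size, parallelism factor) as compile-time template parameters, which an HLS flow can then specialize per deployment target. All names, parameters, and pragmas below are illustrative assumptions, not the authors' implementation:

```cpp
#include <array>
#include <cstddef>

// Hypothetical sketch: a 2D convolution whose dimensions and parallelism
// factor are compile-time parameters, so an HLS tool can generate a
// different hardware instance per (latency, area, power) trade-off point.
template <std::size_t H, std::size_t W, std::size_t K, std::size_t PAR = 1>
void conv2d(const std::array<float, H * W>& in,
            const std::array<float, K * K>& kernel,
            std::array<float, (H - K + 1) * (W - K + 1)>& out) {
    constexpr std::size_t OH = H - K + 1;  // output height
    constexpr std::size_t OW = W - K + 1;  // output width
    for (std::size_t oy = 0; oy < OH; ++oy) {
        // #pragma HLS PIPELINE              (in a real HLS flow)
        for (std::size_t ox = 0; ox < OW; ++ox) {
            float acc = 0.0f;
            // #pragma HLS UNROLL factor=PAR (PAR is the parallelism knob)
            for (std::size_t ky = 0; ky < K; ++ky)
                for (std::size_t kx = 0; kx < K; ++kx)
                    acc += in[(oy + ky) * W + (ox + kx)] * kernel[ky * K + kx];
            out[oy * OW + ox] = acc;
        }
    }
}
```

Changing the template arguments (e.g. the assumed `PAR` unroll factor) lets the same source describe many accelerator variants, which is the kind of multi-constraint design-space exploration the methodology targets.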
Problem

Research questions and friction points this paper is trying to address.

CNN accelerator
embedded deep learning
FPGA
design constraints
parameterization
Innovation

Methods, ideas, or system contributions that make the work stand out.

parameterizable accelerator
HW/SW co-design
high-level synthesis (HLS)
embedded deep learning
multi-constraint optimization