🤖 AI Summary
Existing DNN acceleration solutions for resource-constrained, OS-less IoT endpoints struggle to balance lightweight design with architectural generality. Method: This paper proposes MARVEL, an end-to-end framework that automatically generates customized RISC-V instruction-set extensions tailored to *classes* of DNN models rather than individual models, leveraging Apache TVM for model optimization and code generation, Synopsys ASIP Designer for computational-kernel extraction and RISC-V core customization, and Xilinx Vivado for FPGA deployment. The approach eliminates runtime dependencies on frameworks like TensorFlow or PyTorch, enabling minimal bare-metal execution. Results: Evaluated on the Xilinx ZCU104 platform, MARVEL achieves up to a 2× inference speedup and up to a 2× reduction in energy per inference over baseline implementations, with only a 28.23% hardware area overhead. It maintains compatibility across diverse models including LeNet-5*, MobileNet, and ResNet.
📝 Abstract
Deploying deep neural networks (DNNs) on resource-constrained IoT devices remains a challenging problem, often requiring hardware modifications tailored to individual AI models. Existing accelerator-generation tools, such as AMD's FINN, do not adequately address the extreme resource limitations faced by IoT endpoints operating in bare-metal environments without an operating system (OS). To overcome these constraints, we propose MARVEL, an automated, end-to-end framework that generates custom RISC-V ISA extensions tailored to specific DNN model classes, with a primary focus on convolutional neural networks (CNNs). The proposed method profiles high-level DNN representations in Python and generates an ISA-extended RISC-V core with associated compiler tools for efficient deployment. The flow leverages (1) Apache TVM for translating high-level Python-based DNN models into optimized C code, (2) Synopsys ASIP Designer for identifying compute-intensive kernels and for modeling and generating a custom RISC-V core, and (3) Xilinx Vivado for FPGA implementation. Beyond a model-class-specific RISC-V core, our approach produces an optimized bare-metal C implementation, eliminating the need for an OS or extensive software dependencies. Unlike conventional deployment pipelines that rely on TensorFlow/PyTorch runtimes, our solution enables seamless execution in highly resource-constrained environments. We evaluated the flow on popular DNN models such as LeNet-5*, MobileNetV1, ResNet50, VGG16, MobileNetV2, and DenseNet121, using the Synopsys trv32p3 RISC-V core as a baseline. Results show a 2× speedup in inference and up to a 2× reduction in energy per inference at a 28.23% area overhead when implemented on an AMD Zynq UltraScale+ ZCU104 FPGA platform.
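The kernel-extraction step in the flow rests on a simple observation: a few compute-intensive kernels (chiefly convolutions) dominate CNN inference cost, so accelerating them with custom instructions pays off across a whole model class. The sketch below is purely illustrative and not part of MARVEL; it estimates per-layer multiply-accumulate (MAC) counts for a hypothetical LeNet-5-style network (the layer shapes are assumed, not taken from the paper) to show how such hotspots could be identified from a model description.

```python
# Illustrative hotspot estimate: rank layers of a hypothetical
# LeNet-5-style CNN by MAC count. All shapes are assumptions for
# illustration, not values from the MARVEL paper.

def conv_macs(out_h, out_w, out_c, in_c, k_h, k_w):
    """MACs for a standard (non-depthwise) convolution layer."""
    return out_h * out_w * out_c * in_c * k_h * k_w

def fc_macs(in_features, out_features):
    """MACs for a fully connected layer."""
    return in_features * out_features

# Hypothetical layer shapes, roughly LeNet-5-like.
layers = [
    ("conv1", conv_macs(28, 28, 6, 1, 5, 5)),
    ("conv2", conv_macs(10, 10, 16, 6, 5, 5)),
    ("fc1",   fc_macs(400, 120)),
    ("fc2",   fc_macs(120, 84)),
    ("fc3",   fc_macs(84, 10)),
]

total = sum(m for _, m in layers)
for name, macs in sorted(layers, key=lambda x: -x[1]):
    print(f"{name}: {macs:>8} MACs ({100 * macs / total:5.1f}%)")
```

Even in this toy profile the two convolution layers account for the large majority of MACs, which is the kind of evidence a flow like MARVEL's would use to pick convolution kernels as targets for ISA extension.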