AI Summary
This work addresses the high computational and memory overhead of fully connected (FC) layers in deep neural networks, which hinders efficient deployment on resource-constrained RISC-V edge devices. The authors propose an end-to-end design space exploration methodology based on low-rank factorization, uniquely integrating Tensor Train (TT) decomposition with RISC-V performance feedback. By pruning inefficient decomposition shapes and applying RISC-V-specific compiler optimizations, the approach significantly improves inference efficiency. Implemented with the TensorFlow T3F library, the TT-compressed models achieve, on average, 3× faster inference than IREE and 8× faster than Pluto on RISC-V platforms, offering an efficient solution for deploying fully connected layers in edge AI applications.
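As an illustration of the T3F-based compression (a minimal sketch, not the authors' code; the MNIST-style input, the dimension factorizations 784 = 7·4·7·4 and 625 = 5·5·5·5, and tt_rank=8 are assumptions, and T3F with TensorFlow 2 support is presumed), a dense layer can be swapped for a TT layer via T3F's `KerasDense`:

```python
# Minimal sketch: replace a Dense(625) layer with a Tensor Train layer.
# The weight matrix is stored as small TT-cores instead of a full
# 784x625 matrix. Shapes and tt_rank here are illustrative assumptions.
import tensorflow as tf
from t3f.nn import KerasDense  # TT-compressed Keras dense layer from T3F

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),  # 28*28 = 784
    KerasDense(input_dims=[7, 4, 7, 4],             # 7*4*7*4 = 784
               output_dims=[5, 5, 5, 5],            # 5^4 = 625
               tt_rank=8, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.summary()  # the TT layer holds far fewer parameters than Dense(625)
```

The factorizations of the layer's input and output dimensions and the TT-rank are precisely the knobs the proposed design space exploration tunes.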
Abstract
Deep neural networks (DNNs) have become indispensable in many real-life applications, such as natural language processing and autonomous systems. However, deploying DNNs on resource-constrained devices, e.g., RISC-V platforms, remains challenging due to the high computational and memory demands of fully connected (FC) layers, which dominate resource consumption. Low-rank factorization (LRF) offers an effective approach to compressing FC layers, but the vast design space of LRF solutions involves intricate tradeoffs among FLOPs, memory size, inference time, and accuracy, making the LRF process complex and time-consuming. This article introduces an end-to-end LRF design space exploration methodology and a specialized design tool for optimizing FC layers on RISC-V processors. Using the Tensor Train Decomposition (TTD) offered by the TensorFlow T3F library, the proposed work prunes the LRF design space by excluding, first, inefficient decomposition shapes and, second, solutions with poor inference performance on RISC-V architectures. Compiler optimizations are then applied to enhance the performance of the custom T3F layers, minimizing inference time and boosting computational efficiency. On average, our TT-decomposed layers run 3× faster than IREE and 8× faster than Pluto on the same compressed model. This work provides an efficient solution for deploying DNNs on edge and embedded devices powered by RISC-V architectures.
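To make the shape-pruning step concrete, here is a small sketch (our illustration, not the authors' tool; a uniform TT-rank and raw parameter count as the sole pruning criterion are simplifying assumptions, whereas the actual exploration also weighs FLOPs, measured RISC-V inference time, and accuracy): enumerate candidate factorizations of an FC layer's dimensions and discard TT shapes that fail to compress.

```python
# Sketch of LRF design-space pruning for one FC layer (M inputs, N outputs):
# enumerate TT decomposition shapes and keep only those that compress.
def factorizations(n, d):
    """All ordered ways to write n as a product of d integer factors, each > 1."""
    if d == 1:
        if n > 1:
            yield (n,)
        return
    for f in range(2, n + 1):
        if n % f == 0:
            for rest in factorizations(n // f, d - 1):
                yield (f,) + rest

def tt_matrix_params(in_dims, out_dims, rank):
    """TT-matrix size: sum_k r_{k-1} * m_k * n_k * r_k, with r_0 = r_d = 1."""
    d = len(in_dims)
    ranks = [1] + [rank] * (d - 1) + [1]
    return sum(ranks[k] * in_dims[k] * out_dims[k] * ranks[k + 1]
               for k in range(d))

M, N, d, rank = 784, 625, 4, 8          # illustrative layer and rank
dense_params = M * N
candidates = []
for in_dims in set(factorizations(M, d)):
    for out_dims in set(factorizations(N, d)):
        p = tt_matrix_params(in_dims, out_dims, rank)
        if p < dense_params:            # prune shapes that do not compress
            candidates.append((p, in_dims, out_dims))

candidates.sort()                       # smallest TT representation first
print(f"dense: {dense_params} params; best TT candidate: {candidates[0]}")
```

Shapes surviving this first filter would then be benchmarked on the target RISC-V core, so that only decompositions that are both small and fast are retained.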