Optimizing Tensor Train Decomposition in DNNs for RISC-V Architectures Using Design Space Exploration and Compiler Optimizations

📅 2025-09-18
🏛️ ACM Transactions on Embedded Computing Systems
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the high computational and memory overhead of fully connected layers in deep neural networks, which hinders efficient deployment on resource-constrained RISC-V edge devices. The authors propose an end-to-end design space exploration method based on low-rank decomposition, uniquely integrating Tensor Train (TT) decomposition with RISC-V performance feedback. By pruning ineffective decomposition structures and incorporating RISC-V-specific compiler optimizations, the approach significantly enhances inference efficiency. Implemented using the TensorFlow T3F library, the TT-compressed models achieve, on average, 3× faster inference than IREE and 8× faster than Pluto on RISC-V platforms, offering a highly efficient solution for deploying fully connected layers in edge AI applications.
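The design-space pruning described above can be sketched in a few lines. The helper names (`factorizations`, `tt_params`, `prune_space`), the uniform internal TT rank, and the parameter-count budget are illustrative assumptions for this sketch, not the paper's actual cost model, which additionally folds in measured inference performance on the RISC-V target.

```python
def factorizations(n, k):
    """All ordered ways to write n as a product of k integer factors >= 2."""
    if k == 1:
        if n >= 2:
            yield (n,)
        return
    for f in range(2, n + 1):
        if n % f == 0:
            yield from ((f,) + rest for rest in factorizations(n // f, k - 1))

def tt_params(m_dims, n_dims, rank):
    """Parameter count of a TT-matrix with a uniform internal rank.

    Core i has shape (r_i, m_i, n_i, r_{i+1}) with r_0 = r_k = 1.
    """
    k = len(m_dims)
    ranks = [1] + [rank] * (k - 1) + [1]
    return sum(ranks[i] * m_dims[i] * n_dims[i] * ranks[i + 1] for i in range(k))

def prune_space(M, N, k, ranks, budget):
    """Enumerate (input shape, output shape, rank) candidates for an M x N
    fully connected layer and keep only those under a parameter budget."""
    keep = []
    for m_dims in factorizations(M, k):
        for n_dims in factorizations(N, k):
            for r in ranks:
                p = tt_params(m_dims, n_dims, r)
                if p <= budget:
                    keep.append((m_dims, n_dims, r, p))
    return keep
```

For a 256×256 FC layer (65,536 dense weights), the shape (4,4,4,4)×(4,4,4,4) at rank 2 needs only 192 TT parameters; in the paper's flow, the candidates surviving such a pruning pass would then be ranked by measured inference time on the RISC-V platform.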

📝 Abstract
Deep neural networks (DNNs) have become indispensable in many real-life applications such as natural language processing and autonomous systems. However, deploying DNNs on resource-constrained devices, e.g., RISC-V platforms, remains challenging due to the high computational and memory demands of fully connected (FC) layers, which dominate resource consumption. Low-rank factorization (LRF) offers an effective approach to compressing FC layers, but the vast design space of LRF solutions involves complex tradeoffs among FLOPs, memory size, inference time, and accuracy, making the LRF process complex and time-consuming. This article introduces an end-to-end LRF design space exploration methodology and a specialized design tool for optimizing FC layers on RISC-V processors. Using the Tensor Train Decomposition (TTD) offered by the TensorFlow T3F library, the proposed work prunes the LRF design space by excluding, first, inefficient decomposition shapes and, second, solutions with poor inference performance on RISC-V architectures. Compiler optimizations are then applied to enhance the performance of the custom T3F layers, minimizing inference time and boosting computational efficiency. On average, our TT-decomposed layers run 3× faster than IREE and 8× faster than Pluto on the same compressed model. This work provides an efficient solution for deploying DNNs on edge and embedded devices powered by RISC-V architectures.
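To make the compression concrete, here is a minimal NumPy sketch of the classical TT-SVD algorithm (sequential truncated SVDs) that the TT format is built on. It stands in for the T3F machinery the paper actually uses, and the single uniform `max_rank` cap is an illustrative simplification of per-bond rank selection.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a k-way tensor into TT cores via sequential truncated SVDs."""
    dims = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(rank * dims[0], -1)
    for k in range(len(dims) - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(S))
        # Core k has shape (r_k, d_k, r_{k+1}).
        cores.append(U[:, :r].reshape(rank, dims[k], r))
        # Fold singular values into the remainder and re-matricize.
        mat = (S[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        rank = r
    cores.append(mat.reshape(rank, dims[-1], 1))  # last core: (r_{k-1}, d_{k-1}, 1)
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    res = cores[0]
    for core in cores[1:]:
        res = np.tensordot(res, core, axes=([res.ndim - 1], [0]))
    return res.reshape([c.shape[1] for c in cores])
```

Reshaping an FC weight matrix, e.g. 256×256, into a small-dimension tensor and truncating the TT ranks is what trades a small reconstruction error for far fewer parameters and FLOPs per inference.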
Problem

Research questions and friction points this paper is trying to address.

Tensor Train Decomposition
RISC-V
Deep Neural Networks
Low-Rank Factorization
Design Space Exploration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tensor Train Decomposition
Design Space Exploration
Compiler Optimization
RISC-V
Low-Rank Factorization
Theologos Anthimopoulos
School of Informatics, Aristotle University of Thessaloniki, Greece
M. Kokhazadeh
School of Informatics, Aristotle University of Thessaloniki, Greece
Vasilios I. Kelefouras
School of Engineering, Computing and Mathematics, University of Plymouth, United Kingdom
Benjamin Himpel
School of Informatics, Reutlingen University, Germany
Georgios Keramidas
Assistant Professor, Aristotle University of Thessaloniki, Greece
Computer Architecture
Low-Power Graphics/AI Processors
Edge AI
Compilers
Fault-Tolerance