Unified Framework for Neural Network Compression via Decomposition and Optimal Rank Selection

📅 2024-09-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address key bottlenecks in deploying tensor-decomposition-based compression on edge devices (difficulty of optimal rank selection, reliance on training data, and non-differentiable optimization), this paper proposes a unified framework that performs decomposition and rank optimization jointly. The method introduces: (1) a continuous-space automatic rank search that makes rank configuration data-free, efficient, and differentiable; (2) a composite compression loss that supports end-to-end optimization under strict rank constraints; and (3) a synergistic integration of CP and Tucker decompositions with differentiable fine-tuning strategies. Evaluated across multiple benchmark datasets, the approach achieves up to 5.2× model compression and 1.8× inference speedup with accuracy degradation below 0.3%, significantly outperforming conventional tensor decomposition methods.
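As an illustrative sketch of the Tucker side of such a decomposition (not the paper's implementation), a truncated higher-order SVD in plain NumPy compresses a dense weight tensor into a small core plus one factor matrix per mode; the `ranks` tuple here plays the role that the paper's automatic search would fill:

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: bring axis `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    # Truncated higher-order SVD (a standard Tucker decomposition):
    # each factor holds the top-r left singular vectors of one unfolding.
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    # Contract T with each factor in turn to obtain the small core tensor.
    core = T
    for U in factors:
        core = np.tensordot(core, U, axes=(0, 0))
    return core, factors

def tucker_reconstruct(core, factors):
    # Multiply the core back along every mode to approximate the original.
    T_hat = core
    for U in factors:
        T_hat = np.tensordot(T_hat, U, axes=(0, 1))
    return T_hat
```

For a tensor whose multilinear rank matches `ranks`, the reconstruction is exact, and storing the core plus factors needs far fewer parameters than the original tensor, which is the compression the summary quantifies.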

📝 Abstract
Despite their high accuracy, complex neural networks demand significant computational resources, posing challenges for deployment on resource-constrained devices such as mobile phones and embedded systems. Compression algorithms have been developed to address these challenges by reducing model size and computational demands while maintaining accuracy. Among these approaches, factorization methods based on tensor decomposition are theoretically sound and effective. However, they face difficulties in selecting the appropriate rank for decomposition. This paper tackles this issue by presenting a unified framework that simultaneously applies decomposition and optimal rank selection, employing a composite compression loss within defined rank constraints. Our approach includes an automatic rank search in a continuous space, efficiently identifying optimal rank configurations without the use of training data, making it computationally efficient. Combined with a subsequent fine-tuning step, our approach maintains the performance of highly compressed models on par with their original counterparts. Using various benchmark datasets, we demonstrate the efficacy of our method through a comprehensive analysis.
Problem

Research questions and friction points this paper is trying to address.

Selecting optimal rank for neural network tensor decomposition compression
Reducing computational overhead in rank search without additional training data
Maintaining model accuracy while achieving high compression rates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified framework combining decomposition with optimized rank selection
Automatic rank search in continuous space without training data
Composite compression loss maintains performance after fine-tuning
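The continuous-space rank search can be illustrated, for a single weight matrix and with a hypothetical loss form rather than the paper's exact objective, by attaching a differentiable sigmoid gate to each singular component and penalizing the sum of gates as a soft rank, so that rank selection becomes a smooth optimization problem:

```python
import numpy as np

def gated_rank_loss(W, logits, lam=0.1):
    # Illustrative continuous rank relaxation (hypothetical loss form):
    # each singular component gets a sigmoid gate in [0, 1], and the sum
    # of gates acts as a differentiable surrogate for the rank.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    g = 1.0 / (1.0 + np.exp(-logits))          # soft on/off per component
    W_hat = (U * (g * s)) @ Vt                 # gated low-rank reconstruction
    fidelity = np.linalg.norm(W - W_hat) ** 2  # data-free reconstruction term
    rank_penalty = lam * g.sum()               # soft rank constraint
    return fidelity + rank_penalty
```

With all gates near 1 the loss reduces to the rank penalty alone; with all gates near 0 it reduces to the squared Frobenius norm of `W`. Gradient descent on `logits` trades these terms off, which is the sense in which rank configuration becomes differentiable and needs no training data.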
Ali Aghababaei Harandi
Université Grenoble Alpes, Computer Science Laboratory, Grenoble, France
Massih-Reza Amini
Professor, Université Grenoble Alpes
Artificial Intelligence · Machine Learning · Learning Theory · Information Retrieval