MCNC: Manifold-Constrained Reparameterization for Neural Compression

📅 2024-06-27
📈 Citations: 1
Influential: 0
🤖 AI Summary
Large language models (LLMs) incur prohibitive storage and transmission costs, while existing compression techniques—such as low-rank decomposition, pruning, and quantization—are fundamentally limited by linear or local constraints. To address this, we propose Manifold-Constrained Neural Compression (MCNC), the first method to explicitly constrain model parameters to a predefined, frozen low-dimensional nonlinear manifold. MCNC achieves end-to-end differentiable training via nonlinear embedding, parameter reparameterization, and expansion over frozen basis functions. By incorporating strong structural priors, it preserves high representational capacity while overcoming modeling limitations inherent in conventional approaches. Extensive experiments across diverse computer vision and natural language processing tasks demonstrate that MCNC significantly outperforms state-of-the-art methods—including LoRA and QLoRA—in compression ratio (>100×), accuracy (average +1.2% Acc/F1), and reconstruction speed (3.8× faster).
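The core idea above can be illustrated with a minimal sketch: only a low-dimensional code is trained and stored, while a frozen, randomly initialized nonlinear decoder maps that code to the full weight matrix. This is an assumption-laden toy (the names `decode`, `code_dim`, and the tanh MLP decoder are illustrative, not the paper's exact architecture), intended only to show the reparameterization pattern.

```python
import numpy as np

# Hedged sketch of manifold-constrained reparameterization:
# a frozen nonlinear decoder maps a tiny trainable code `u`
# to a full weight matrix, so only `u` needs to be stored.
# The decoder design here is illustrative, not MCNC's exact one.

rng = np.random.default_rng(0)

out_features, in_features = 64, 64   # target weight shape: 4096 parameters
code_dim = 8                         # low-dimensional manifold coordinate

# Frozen decoder: fixed random weights, never trained.
W1 = rng.standard_normal((code_dim, 256)) / np.sqrt(code_dim)
W2 = rng.standard_normal((256, out_features * in_features)) / np.sqrt(256)

def decode(u):
    """Map a code u on the low-dimensional manifold to a weight matrix."""
    h = np.tanh(u @ W1)                        # nonlinear embedding
    return (h @ W2).reshape(out_features, in_features)

u = rng.standard_normal(code_dim)  # only these 8 numbers are stored/trained
W = decode(u)                      # reconstructed dense weight matrix

print(f"compression: {W.size // u.size}x")  # → compression: 512x
```

Because `decode` is differentiable, gradients with respect to the full weights flow back to `u`, which is what makes end-to-end training through the frozen manifold possible.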

📝 Abstract
The outstanding performance of large foundational models across diverse tasks, from computer vision to speech and natural language processing, has significantly increased their demand. However, storing and transmitting these models poses significant challenges due to their massive size (e.g., 750GB for Llama 3.1 405B). Recent literature has focused on compressing the original weights or reducing the number of parameters required for fine-tuning these models. These compression methods generally constrain the parameter space, for example, through low-rank reparametrization (e.g., LoRA), pruning, or quantization (e.g., QLoRA) during or after the model training. In this paper, we present a novel model compression method, which we term Manifold-Constrained Neural Compression (MCNC). This method constrains the parameter space to low-dimensional pre-defined and frozen nonlinear manifolds, which effectively cover this space. Given the prevalence of good solutions in over-parameterized deep neural networks, we show that by constraining the parameter space to our proposed manifold, we can identify high-quality solutions while achieving unprecedented compression rates across a wide variety of tasks and architectures. Through extensive experiments in computer vision and natural language processing tasks, we demonstrate that our method significantly outperforms state-of-the-art baselines in terms of compression, accuracy, and/or model reconstruction time. Our code is publicly available at https://github.com/mint-vu/MCNC.
Problem

Research questions and friction points this paper is trying to address.

Compressing large foundational models efficiently
Constraining parameter space with nonlinear manifolds
Outperforming existing compression methods in accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constrains parameters to low-dimensional nonlinear manifolds
Achieves high compression rates with quality solutions
Outperforms baselines in compression and accuracy
Chayne Thrash
Department of Computer Science, Vanderbilt University, Nashville, TN
Ali Abbasi
Department of Computer Science, Vanderbilt University, Nashville, TN
Parsa Nooralinejad
Department of Computer Science, University of California, Davis, CA
Soroush Abbasi Koohpayegani
Department of Computer Science, University of California, Davis, CA
Reed Andreas
Department of Computer Science, Vanderbilt University, Nashville, TN
Hamed Pirsiavash
Associate Professor at University of California, Davis
Computer Vision, Machine Learning
Soheil Kolouri
Department of Computer Science, Vanderbilt University, Nashville, TN
Machine Learning, Optimal Transport, Computer Vision