🤖 AI Summary
Large language models (LLMs) incur prohibitive storage and transmission costs, while existing compression techniques—such as low-rank decomposition, pruning, and quantization—are fundamentally limited by linear or local constraints. To address this, we propose Manifold-Constrained Neural Compression (MCNC), the first method to explicitly constrain model parameters to a predefined, frozen low-dimensional nonlinear manifold. MCNC achieves end-to-end differentiable training via nonlinear embedding, parameter reparameterization, and expansion over frozen basis functions. By incorporating strong structural priors, it preserves high representational capacity while overcoming modeling limitations inherent in conventional approaches. Extensive experiments across diverse computer vision and natural language processing tasks demonstrate that MCNC significantly outperforms state-of-the-art methods—including LoRA and QLoRA—in compression ratio (>100×), accuracy (average +1.2% Acc/F1), and reconstruction speed (3.8× faster).
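The core idea above — storing only a small latent code and recovering the full weight tensor through a frozen nonlinear map — can be sketched as follows. This is a minimal illustrative toy, not the authors' actual construction: the two-layer tanh decoder, the dimensions, and the random frozen bases are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions): an 8-dim latent code decodes to a
# 64x64 weight matrix (D = 4096 flattened entries).
d, D = 8, 64 * 64
W1 = rng.standard_normal((d, 128)) / np.sqrt(d)    # frozen random basis, layer 1
W2 = rng.standard_normal((128, D)) / np.sqrt(128)  # frozen random basis, layer 2

def decode(z):
    """Map a latent code z of shape (d,) onto a point of the frozen
    nonlinear manifold embedded in R^D (here: a fixed 2-layer tanh net)."""
    return np.tanh(z @ W1) @ W2

z = rng.standard_normal(d)        # the only parameters trained and stored
W = decode(z).reshape(64, 64)     # full weight matrix, reconstructed on demand

compression_ratio = D / d         # reconstructed entries per stored parameter
print(W.shape, compression_ratio)
```

Because `decode` is differentiable, gradients with respect to the latent `z` can flow end-to-end through the frozen map, which is what makes training under this constraint possible; only `z` (here 8 numbers instead of 4096) needs to be stored or transmitted.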
📝 Abstract
The outstanding performance of large foundation models across diverse tasks, from computer vision to speech and natural language processing, has significantly increased their demand. However, storing and transmitting these models is challenging due to their massive size (e.g., 750GB for Llama 3.1 405B). Recent literature has focused on compressing the original weights or reducing the number of parameters required for fine-tuning these models. These compression methods generally constrain the parameter space, for example, through low-rank reparametrization (e.g., LoRA), pruning, or quantization (e.g., QLoRA) during or after model training. In this paper, we present a novel model compression method, which we term Manifold-Constrained Neural Compression (MCNC). This method constrains the parameter space to low-dimensional, pre-defined, and frozen nonlinear manifolds that effectively cover the original parameter space. Given the prevalence of good solutions in over-parameterized deep neural networks, we show that by constraining the parameter space to our proposed manifold, we can identify high-quality solutions while achieving unprecedented compression rates across a wide variety of tasks and architectures. Through extensive experiments in computer vision and natural language processing tasks, we demonstrate that our method significantly outperforms state-of-the-art baselines in terms of compression, accuracy, and/or model reconstruction time. Our code is publicly available at https://github.com/mint-vu/MCNC.