Only relative ranks matter in weight-clustered large language models

📅 2026-03-18
🤖 AI Summary
This work demonstrates that large language models exhibit substantial parameter redundancy, with performance depending more critically on the relative rank of weights than their absolute values. The authors propose an efficient, retraining-free compression method that quantizes weights into 16–64 shared cluster centroids via K-means clustering, followed by fine-tuning of cluster centers and an affine correction (w′ = aw + b) to recover performance. To theoretically ground this approach, they introduce a rank-preserving perturbation analysis framework, showing that perturbations disrupting weight order cause severe performance degradation, whereas rank-preserving perturbations have minimal impact. Evaluated on Llama-3.1-8B-Instruct and SmolLM2-135M, the method achieves high compression rates while preserving accuracy, with cluster-center fine-tuning recovering 30–40% of the initial performance loss.
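The clustering step described in the summary can be sketched as a 1-D k-means over a flattened weight matrix, with every entry replaced by its nearest shared centroid. This is a minimal illustrative sketch in plain NumPy; the function name, quantile initialization, and iteration count are assumptions, not the authors' implementation:

```python
import numpy as np

def cluster_weights(W, k=16, iters=20):
    """Replace every entry of W with one of k shared centroid values
    (1-D k-means over the flattened weights). Illustrative sketch only."""
    w = W.ravel()
    # initialize centroids at evenly spaced quantiles of the weight distribution
    centroids = np.quantile(w, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # assign each weight to its nearest centroid
        assign = np.abs(w[:, None] - centroids[None, :]).argmin(axis=1)
        # move each centroid to the mean of its assigned weights
        for j in range(k):
            members = w[assign == j]
            if members.size:
                centroids[j] = members.mean()
    assign = np.abs(w[:, None] - centroids[None, :]).argmin(axis=1)
    return centroids[assign].reshape(W.shape), assign.reshape(W.shape), centroids

# Toy weight matrix standing in for one layer's weights
W = np.random.default_rng(1).normal(size=(64, 64)).astype(np.float64)
W_q, assign, centroids = cluster_weights(W, k=16)
print(np.unique(W_q).size)  # at most 16 distinct values in the quantized matrix
```

Because only the k centroid values and the per-weight assignment indices need to be stored, each matrix compresses to roughly log2(k) bits per weight plus a tiny codebook, which is the source of the on-disk savings the paper reports.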

📝 Abstract
Large language models (LLMs) contain billions of parameters, yet many exact values are not essential. We show that what matters most is the relative rank of weights (whether one connection is stronger or weaker than another) rather than precise magnitudes. To reduce the number of unique weight values, we apply weight clustering to pretrained models, replacing every weight matrix with K shared values from K-means. For Llama 3.1-8B-Instruct and SmolLM2-135M, reducing each matrix to only 16–64 distinct values preserves strong accuracy without retraining, providing a simple, training-free method to compress LLMs on disk. Optionally fine-tuning only the cluster means (centroids) recovers 30–40 percent of the remaining accuracy gap at minimal cost. We then systematically randomize cluster means while keeping assignments fixed. Scrambling the relative ranks of the clusters degrades quality sharply: perplexity can increase by orders of magnitude, even when global statistics such as mean and variance are preserved. In contrast, rank-preserving randomizations cause almost no loss at mid and late layers. When many layers are perturbed simultaneously, however, progressive layer-by-layer replacement reveals that scale drift, not rank distortion, is the dominant collapse mechanism; an affine correction w' = aw + b with a > 0 (which preserves both rank order and the overall weight distribution) can substantially delay this drift. This rank-based perspective offers a new lens on model compression and robustness.
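The perturbation analysis in the abstract contrasts two manipulations of the shared cluster values. A small NumPy sketch (with hypothetical values, not the paper's data) shows why an affine map with a > 0 preserves the rank order of the clusters while a permutation scrambles it even though global statistics are untouched:

```python
import numpy as np

rng = np.random.default_rng(0)
centroids = np.sort(rng.normal(size=16))  # shared cluster values of one matrix

# Rank-preserving perturbation: affine map w' = a*w + b with a > 0
a, b = 1.3, 0.05
affine = a * centroids + b
# the ordering of the cluster values is unchanged
print(np.array_equal(np.argsort(centroids), np.argsort(affine)))  # True

# Rank-scrambling perturbation: permute the cluster values while keeping
# assignments fixed; the global mean and variance are preserved exactly,
# but which cluster is "stronger" than which is destroyed
scrambled = rng.permutation(centroids)
print(np.isclose(scrambled.mean(), centroids.mean()),
      np.isclose(scrambled.std(), centroids.std()))  # True True
```

The abstract's finding is that the first kind of perturbation is nearly harmless at mid and late layers, while the second can raise perplexity by orders of magnitude, which is what motivates reading compression through a rank-based lens.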
Problem

Research questions and friction points this paper is trying to address.

weight clustering
relative rank
large language models
model compression
weight quantization
Innovation

Methods, ideas, or system contributions that make the work stand out.

weight clustering
relative rank
training-free compression
scale drift
affine correction
Borja Aizpurua
Quantum Research Scientist @ Multiverse Computing
cryptography · quantum computing · machine learning · anomaly detection
Sukhbinder Singh
Multiverse Computing, Toronto, Ontario, Canada
Román Orús
Multiverse Computing, San Sebastián, Spain; Donostia International Physics Center, San Sebastián, Spain; Ikerbasque Foundation for Science, Bilbao, Spain