Q3R: Quadratic Reweighted Rank Regularizer for Effective Low-Rank Training

📅 2025-11-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of jointly enforcing structural constraints and optimization objectives in low-rank pretraining, this paper proposes the Quadratic Reweighted Rank Regularizer (Q3R). The method introduces: (i) a differentiable, smooth log-determinant surrogate for the rank function, replacing the non-differentiable rank operator; and (ii) an iterative reweighting mechanism inspired by Iteratively Reweighted Least Squares (IRLS), which upper-bounds the rank surrogate via quadratic regularization terms, enabling training to prescribed target ranks and seamless integration into Transformer architectures. Evaluated on ViT-Tiny, Q3R achieves 60–80% parameter compression with only 1.3–4.0% top-1 accuracy degradation, closely matching full-parameter model performance. Theoretically grounded and empirically efficient, Q3R establishes a unified framework for low-rank pretraining that reconciles rigorous rank control with end-to-end differentiability and architectural compatibility.

📝 Abstract
Parameter-efficient training based on low-rank optimization has become a highly successful tool for fine-tuning large deep-learning models. However, these methods struggle in low-rank pre-training, where jointly maintaining the low-rank structure and optimizing the training objective remains challenging. We propose the Quadratic Reweighted Rank Regularizer, dubbed Q3R, which leads to a novel low-rank-inducing training strategy inspired by the iteratively reweighted least squares (IRLS) framework. Q3R is based on a quadratic regularizer term that majorizes a smoothed log-determinant serving as a rank surrogate objective. Unlike other low-rank training techniques, Q3R trains weight matrices to prescribed, low target ranks while achieving predictive performance comparable to dense models, with small computational overhead, and remains fully compatible with existing architectures. For example, in one experiment we truncate $60\%$ and $80\%$ of the parameters of a ViT-Tiny model with only $\sim 1.3\%$ and $\sim 4\%$ accuracy drops on CIFAR-10, respectively. The efficacy of Q3R is confirmed on Transformers across both image and language tasks, including low-rank fine-tuning.
Problem

Research questions and friction points this paper is trying to address.

Maintaining low-rank structure during neural network pre-training
Achieving comparable performance to dense models with fewer parameters
Enabling efficient low-rank training compatible with existing architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quadratic Reweighted Rank Regularizer for low-rank training
Majorizes smoothed log determinant as rank surrogate
Trains weight matrices with prescribed low target ranks
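The majorization in the bullets above can be sketched numerically. The following is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the smoothing value `eps`, and the matrix shapes are chosen for the example. By concavity of the log-determinant, the reweighted quadratic term $\mathrm{tr}(W H_k W^\top)$ with $H_k = (W_k^\top W_k + \varepsilon I)^{-1}$ yields an upper bound on the smoothed surrogate $\log\det(W^\top W + \varepsilon I)$ that is tight at the current iterate $W_k$:

```python
import numpy as np

def logdet_surrogate(W, eps=1e-3):
    # Smoothed log-determinant rank surrogate: log det(W^T W + eps * I).
    n = W.shape[1]
    _, logdet = np.linalg.slogdet(W.T @ W + eps * np.eye(n))
    return logdet

def reweighting_matrix(W_k, eps=1e-3):
    # IRLS-style weight H_k = (W_k^T W_k + eps * I)^{-1}, recomputed
    # at each reweighting step from the current iterate W_k.
    n = W_k.shape[1]
    return np.linalg.inv(W_k.T @ W_k + eps * np.eye(n))

def quadratic_penalty(W, H_k):
    # Quadratic regularizer tr(W H_k W^T); with H_k frozen, this is a
    # smooth quadratic in W that a gradient-based optimizer can handle.
    return np.trace(W @ H_k @ W.T)

def majorizer(W, W_k, eps=1e-3):
    # First-order (in W^T W) upper bound on the surrogate at W_k:
    # log det is concave, so its linearization around W_k^T W_k
    # dominates it; in W this linearization is quadratic.
    H_k = reweighting_matrix(W_k, eps)
    return (logdet_surrogate(W_k, eps)
            + quadratic_penalty(W, H_k)
            - quadratic_penalty(W_k, H_k))
```

In a training loop, `H_k` would be held fixed between periodic reweighting steps, so the optimizer only ever sees the smooth quadratic penalty rather than the non-convex surrogate itself.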
Ipsita Ghosh
Department of Computer Science, University of Central Florida
Ethan Nguyen
Department of Computer Science, University of North Carolina at Charlotte
Christian Kümmerle
Assistant Professor, University of North Carolina at Charlotte
Machine Learning · Data Science · Non-Convex Optimization · High-Dimensional Probability · Signal Processing