W2T: LoRA Weights Already Know What They Can Do

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of inferring the task identity and performance characteristics of a LoRA adapter directly from its weights, without executing the base model or accessing the original training data. To this end, the authors propose W2T (Weight2Token), a canonical representation based on QR decomposition and singular value decomposition (SVD) that resolves the inherent non-uniqueness of LoRA weight factorization by mapping every equivalent factorization to a unique canonical form. This normalized representation is then encoded with a Transformer to model adapter behavior. Evaluated on collections of LoRA adapters across both language and vision domains, the approach significantly improves accuracy in task classification, performance prediction, and adapter retrieval, marking the first end-to-end method capable of semantically interpreting LoRA weights.

📝 Abstract
Each LoRA checkpoint compactly stores task-specific updates in low-rank weight matrices, offering an efficient way to adapt large language models to new tasks and domains. In principle, these weights already encode what the adapter does and how well it performs. In this paper, we ask whether this information can be read directly from the weights, without running the base model or accessing training data. A key obstacle is that a single LoRA update can be factorized in infinitely many ways. Without resolving this ambiguity, models trained on the factors may fit the particular factorization rather than the underlying update. To this end, we propose Weight2Token (W2T), which maps each LoRA update to a provably canonical form via QR decomposition followed by SVD, so that all equivalent factorizations share the same representation. The resulting components are then tokenized and processed by a Transformer to produce a weight-space embedding. Across language and vision LoRA collections, W2T achieves strong results on attribute classification, performance prediction, and adapter retrieval, demonstrating that LoRA weights reliably indicate model behavior once factorization ambiguity is removed. Code is available at https://github.com/xiaolonghan2000/Weight2Token.
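The factorization ambiguity and the QR-then-SVD fix described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: two factorizations (B₁, A₁) and (B₂, A₂) of the same update ΔW = BA are mapped to (nearly) the same canonical components; the residual per-direction sign ambiguity of the SVD, and how the paper resolves it, is an assumption here.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 16, 12, 4  # hypothetical layer dims and LoRA rank

# One LoRA factorization of the update Delta_W = B @ A.
B1 = rng.standard_normal((d, r))
A1 = rng.standard_normal((r, k))

# An equivalent factorization: insert any invertible r x r matrix M,
# since (B M)(M^-1 A) = B A. This is the non-uniqueness the paper targets.
M = rng.standard_normal((r, r)) + 3.0 * np.eye(r)  # keep it well-conditioned
B2 = B1 @ M
A2 = np.linalg.inv(M) @ A1

def canonical_form(B, A):
    """QR on B, then SVD of R @ A, as described in the abstract."""
    Q, R = np.linalg.qr(B)                              # B = Q R
    U, S, Vt = np.linalg.svd(R @ A, full_matrices=False)
    # (Q U) S Vt is an SVD of Delta_W, so it depends only on B @ A,
    # not on the particular factorization.
    return Q @ U, S, Vt

L1, S1, V1 = canonical_form(B1, A1)
L2, S2, V2 = canonical_form(B2, A2)

# Singular values agree exactly (up to numerics) across factorizations.
assert np.allclose(S1, S2)

# Singular vectors agree up to a per-direction sign flip; aligning signs
# (one possible convention, assumed for this sketch) makes them identical.
signs = np.sign(np.sum(L1 * L2, axis=0))
assert np.allclose(L1 * signs, L2, atol=1e-8)
assert np.allclose(V1 * signs[:, None], V2, atol=1e-8)
```

With distinct singular values the SVD is unique up to these sign flips, which is why a fixed sign convention yields one representation per update that a downstream Transformer can consume.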
Problem

Research questions and friction points this paper is trying to address.

LoRA
weight decomposition ambiguity
model behavior prediction
canonical representation
adapter analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

LoRA
canonical representation
QR decomposition
SVD
weight-space embedding