🤖 AI Summary
To address the parameter inefficiency of fine-tuning pretrained Transformer models, this paper proposes MetaTT, a unified adapter framework based on a global Tensor Train (TT) decomposition. Methodologically, MetaTT introduces a single shared TT structure that models parameter increments across the entire network, enabling joint low-rank adaptation across layers and modules (e.g., Q/K/V projections and FFNs). It incorporates structured axis indexing -- spanning layer, matrix type, attention head, and task -- and employs DMRG-style rank-adaptive optimization; thanks to the TT structure, the parameter count grows with the sum of the mode dimensions rather than their product. Empirically, on standard language modeling benchmarks, MetaTT achieves accuracy comparable to LoRA while using significantly fewer parameters, and outperforms CP-based tensor methods. Moreover, it natively supports multi-task adapter sharing without modifying the backbone architecture.
📝 Abstract
We present MetaTT, a unified Tensor Train (TT) adapter framework for global low-rank fine-tuning of pre-trained transformers. Unlike LoRA, which fine-tunes each weight matrix independently, MetaTT uses a single shared TT to factorize all transformer sub-modules -- query, key, value, projection, and feed-forward layers -- by indexing structural axes such as layer and matrix type, and optionally heads and tasks. For a given rank, LoRA adds parameters proportional to the product across modes, whereas MetaTT adds parameters proportional to the sum across modes, yielding a significantly compressed final adapter. Our benchmarks compare MetaTT with LoRA and with recent state-of-the-art matrix- and tensor-decomposition-based fine-tuning schemes. On standard language modeling benchmarks, MetaTT yields the largest reduction in parameters while maintaining accuracy similar to LoRA, and it outperforms other tensor-based methods. Unlike CP or other rank factorizations, the TT ansatz benefits from mature optimization routines -- e.g., DMRG-style rank-adaptive minimization in addition to Adam -- which we find simplifies training. Because new modes can be appended cheaply, MetaTT naturally extends to shared adapters across many tasks without redesigning the core tensor.
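The product-vs-sum scaling can be made concrete with a small NumPy sketch. All dimensions and core shapes below (12 layers, 4 matrix types, hidden size 768, a single shared rank 8) are illustrative assumptions, not values from the paper, and the 4-mode TT layout is one plausible instantiation of the shared-adapter idea:

```python
import numpy as np

# Illustrative dimensions (assumptions, not values from the paper):
# 12 layers, 4 matrix types (Q, K, V, O), hidden size 768, rank 8.
L, M, d, r = 12, 4, 768, 8

# LoRA: each of the L*M weight matrices gets its own rank-r pair (A, B),
# so adapter parameters grow with the product of the structural modes.
lora_params = L * M * (2 * d * r)

# MetaTT (sketch): one 4-mode update tensor of shape (L, M, d, d) is held
# as a single Tensor Train with cores G1..G4, so parameters grow with the
# sum of the mode dimensions, each weighted by the TT ranks.
G1 = np.random.randn(1, L, r)
G2 = np.random.randn(r, M, r)
G3 = np.random.randn(r, d, r)
G4 = np.random.randn(r, d, 1)
tt_params = sum(g.size for g in (G1, G2, G3, G4))

# Reconstructing the update for one (layer, matrix-type) slice is a chain
# of core contractions:
# delta_W[i, j] = sum_{a,b,c} G1[0,l,a] G2[a,m,b] G3[b,i,c] G4[c,j,0].
l, m = 3, 1
v = G1[0, l, :] @ G2[:, m, :]                            # shape (r,)
delta_W = np.einsum('b,bic,cj->ij', v, G3, G4[:, :, 0])  # shape (d, d)

print(lora_params, tt_params)  # 589824 vs 55648: product vs sum scaling
```

Indexing a structural axis (here `l` and `m`) collapses the corresponding core to a small matrix, so materializing any one layer's update costs only a few rank-sized matrix products; appending a new mode (e.g., a task axis) adds one more core rather than a new adapter per task.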