ABBA: Highly Expressive Hadamard Product Adaptation for Large Language Models

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the longstanding trade-off between parameter efficiency and representational capacity in large language model (LLM) adaptation, this paper proposes ABBA, a fully decoupled, parameter-efficient fine-tuning method that operates independently of the pretrained weights. ABBA reparameterizes the incremental update as the Hadamard product of two independently learnable low-rank matrices, introducing a weight-agnostic and inherently nonlinear low-rank update structure that overcomes the expressivity limitations of linear low-rank methods such as LoRA. Theoretical analysis and matrix reconstruction experiments demonstrate ABBA's superior representational power. Empirically, ABBA achieves state-of-the-art performance across diverse benchmarks, including arithmetic and commonsense reasoning, outperforming leading parameter-efficient fine-tuning (PEFT) approaches such as LoRA and HiRA. It also delivers consistent gains across multiple large-scale models (e.g., Llama, Qwen), confirming its generalizability and stability.

📝 Abstract
Large Language Models have demonstrated strong performance across a wide range of tasks, but adapting them efficiently to new domains remains a key challenge. Parameter-Efficient Fine-Tuning (PEFT) methods address this by introducing lightweight, trainable modules while keeping most pre-trained weights fixed. The prevailing approach, LoRA, models updates using a low-rank decomposition, but its expressivity is inherently constrained by the rank. Recent methods like HiRA aim to increase expressivity by incorporating a Hadamard product with the frozen weights, but still rely on the structure of the pre-trained model. We introduce ABBA, a new PEFT architecture that reparameterizes the update as a Hadamard product of two independently learnable low-rank matrices. In contrast to prior work, ABBA fully decouples the update from the pre-trained weights, enabling both components to be optimized freely. This leads to significantly higher expressivity under the same parameter budget. We formally analyze ABBA's expressive capacity and validate its advantages through matrix reconstruction experiments. Empirically, ABBA achieves state-of-the-art results on arithmetic and commonsense reasoning benchmarks, consistently outperforming existing PEFT methods by a significant margin across multiple models. Our code is publicly available at: https://github.com/CERT-Lab/abba.
Problem

Research questions and friction points this paper is trying to address.

Efficiently adapting large language models to new domains
Overcoming expressivity limits in parameter-efficient fine-tuning methods
Decoupling updates from pre-trained weights for higher flexibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

ABBA uses Hadamard product of low-rank matrices
Decouples updates from pre-trained weights
Enhances expressivity under same parameter budget
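The update structure described above can be sketched numerically. This is a minimal illustration, not the paper's implementation: the factor names `B1, A1, B2, A2`, the shapes, and the rank comparison are illustrative assumptions based on the abstract's description of the update as a Hadamard product of two learnable low-rank matrices.

```python
import numpy as np

# Illustrative shapes (assumptions): a 64x64 weight update built from rank-4 factors.
d_out, d_in, r = 64, 64, 4
rng = np.random.default_rng(0)

# Two independently learnable low-rank factor pairs (hypothetical names).
B1 = rng.standard_normal((d_out, r))
A1 = rng.standard_normal((r, d_in))
B2 = rng.standard_normal((d_out, r))
A2 = rng.standard_normal((r, d_in))

# LoRA-style linear update: a single low-rank product, rank at most r.
delta_lora = B1 @ A1

# ABBA-style update: Hadamard (element-wise) product of two low-rank products.
# The Hadamard product of two rank-r matrices can have rank up to r**2,
# so the same parameter budget can express a higher-rank update.
delta_abba = (B1 @ A1) * (B2 @ A2)

print(np.linalg.matrix_rank(delta_lora))  # bounded by r
print(np.linalg.matrix_rank(delta_abba))  # can reach r**2
```

For generic factors, the LoRA-style update has rank exactly `r`, while the Hadamard-product update typically attains rank `r**2`, which is the expressivity gap the paper formalizes.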