🤖 AI Summary
To address the trade-off between accuracy and inference latency in parameter-efficient fine-tuning (PEFT) of large language models (LLMs), this paper proposes FLoRA, a family of fused forward-backward adapters (FFBA) that integrate LoRA with a parallel adapter structure within Transformer projection layers. FFBA employs low-rank decomposition for parameter-efficient updates and simultaneously introduces lightweight adapter modules along both the forward and backward paths, jointly enhancing training stability and inference efficiency. Under identical parameter budgets, FFBA consistently outperforms standard LoRA across multiple downstream tasks: it achieves average accuracy gains of 2.1-4.7 percentage points, reduces first-token latency by 18%-32%, and lowers total inference latency by 12%-25%. Notably, this work extends adapter design to the backward path, establishing a new paradigm for co-optimizing accuracy and efficiency in PEFT.
📝 Abstract
As large language models (LLMs) continue to grow in size, efficient training and fine-tuning have never been more important. This has generated great interest in parameter-efficient fine-tuning (PEFT), and effective methods such as low-rank adapters (LoRA) have emerged. Although various PEFT methods have been studied extensively in recent years, much of the design space remains unexplored given its large degree of freedom. In this paper, we propose FLoRA, a family of fused forward-backward adapters (FFBA) for parameter-efficient fine-tuning of LLMs on downstream tasks. FFBA combines ideas from the popular LoRA and parallel adapters to improve overall fine-tuning accuracy. At the same time, latency is minimized by fusing the forward and backward adapters into the existing projection layers of the base model. Experimental results show that the proposed FFB adapters perform significantly better than the widely used LoRA in both accuracy and latency for a similar parameter budget.
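The abstract describes two ingredients: a LoRA-style low-rank update to a projection weight, and a parallel adapter branch, with the linear parts fused into the base projection at inference so they add no extra matmul. The paper's exact FFBA architecture is not specified here, so the following NumPy sketch is illustrative only: the dimensions, the ReLU bottleneck adapter, and all variable names are assumptions, not the authors' implementation. It shows why the low-rank branch can be folded into the frozen weight exactly, while a nonlinear adapter branch cannot.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, rank, d_adapter = 16, 4, 8  # illustrative sizes, not from the paper

# Frozen base projection weight (e.g., a Transformer q/k/v/o projection).
W = rng.standard_normal((d_model, d_model)) * 0.02

# LoRA factors: low-rank update delta_W = B @ A with rank << d_model.
# Nonzero here to mimic weights *after* fine-tuning (LoRA zero-inits B before training).
A = rng.standard_normal((rank, d_model)) * 0.02
B = rng.standard_normal((d_model, rank)) * 0.02

# Hypothetical parallel adapter: a small nonlinear bottleneck branch.
W_down = rng.standard_normal((d_adapter, d_model)) * 0.02
W_up = rng.standard_normal((d_model, d_adapter)) * 0.02

def relu(x):
    return np.maximum(x, 0.0)

def adapted_projection(x):
    """Un-fused view: base path + LoRA path + parallel adapter path."""
    return x @ W.T + x @ (B @ A).T + relu(x @ W_down.T) @ W_up.T

def fused_projection(x):
    """Inference-time view: fold the low-rank update into W, so the LoRA
    branch costs no extra matmul; the nonlinear adapter branch cannot be
    folded this way and stays a separate (lightweight) path."""
    W_fused = W + B @ A
    return x @ W_fused.T + relu(x @ W_down.T) @ W_up.T

x = rng.standard_normal((2, d_model))
assert np.allclose(adapted_projection(x), fused_projection(x))
```

The exact equivalence of the two views holds only for the linear LoRA branch; this is the standard motivation for merging low-rank updates into base weights at deployment, and is consistent with (though not identical to) the fusion the abstract describes.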