Faster Parameter-Efficient Tuning with Token Redundancy Reduction

📅 2025-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Parameter-efficient tuning (PET) methods inherit the inference latency of their large backbone models and add computational overhead of their own, which limits training and inference efficiency and constrains deployment. To address this, the paper proposes a plug-and-play, fully differentiable token redundancy reduction mechanism. The method introduces adapter-driven token similarity modeling tailored to PET, coupled with differentiable token merging trained via the straight-through estimator (STE), enabling dynamic, accuracy-preserving token reduction. Crucially, it keeps parameter overhead very low (under 0.5% of the base model) while achieving up to 2–3× lower inference latency, 30–50% reduced GPU memory consumption, and accelerated training. The approach matches state-of-the-art PET performance without modifying the model architecture or retraining the backbone.
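The straight-through estimator mentioned in the summary can be sketched as follows. This is a generic STE illustration in NumPy, not the paper's implementation; the function names, the threshold value, and the binary keep/drop formulation are assumptions for illustration.

```python
import numpy as np

def ste_forward(scores, threshold=0.5):
    """Forward pass: hard, non-differentiable keep/drop decision.
    Tokens whose (illustrative) keep-score exceeds the threshold
    are kept (1.0), the rest are dropped (0.0)."""
    return (scores > threshold).astype(np.float32)

def ste_backward(grad_output):
    """Backward pass: the straight-through trick treats the hard
    thresholding as the identity, so upstream gradients pass
    through unchanged to the continuous scores."""
    return grad_output

# Illustrative keep-scores for four tokens.
scores = np.array([0.9, 0.2, 0.7, 0.1], dtype=np.float32)
mask = ste_forward(scores)                          # [1., 0., 1., 0.]
grad = ste_backward(np.ones_like(scores))           # identity gradient
```

In an autograd framework the same idea is usually written as `soft + stop_gradient(hard - soft)`, so the hard mask is used in the forward pass while gradients flow to the continuous scores.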

📝 Abstract
Parameter-efficient tuning (PET) aims to transfer pre-trained foundation models to downstream tasks by learning a small number of parameters. Compared to traditional fine-tuning, which updates the entire model, PET significantly reduces storage and transfer costs for each task, even as pre-trained model capacity grows exponentially. However, most PET methods inherit the inference latency of their large backbone models and often introduce extra computational overhead from added modules (e.g., adapters), limiting their practicality for compute-intensive applications. In this paper, we propose Faster Parameter-Efficient Tuning (FPET), a novel approach that enhances inference speed and training efficiency while maintaining high storage efficiency. Specifically, we introduce a plug-and-play token redundancy reduction module designed specifically for PET. This module refines tokens from the self-attention layer using an adapter to learn accurate similarity between tokens, and prunes redundant tokens through a fully differentiable token merging strategy that uses a straight-through estimator for optimal token reduction. Experimental results show that FPET achieves faster inference and higher memory efficiency than the pre-trained backbone while keeping performance competitive with state-of-the-art PET methods.
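The token merging step can be illustrated with a generic similarity-based sketch in the spirit of bipartite token-merging methods. The function name, the even/odd token split, and the averaging rule below are assumptions for illustration, not the paper's exact algorithm (which learns similarity through an adapter).

```python
import numpy as np

def bipartite_merge(tokens, r):
    """Reduce a (N, D) token matrix to (N - r, D) by merging the
    r most similar token pairs.

    Tokens are split into two alternating sets A and B; each A token
    is matched to its most similar B token by cosine similarity, and
    the r highest-similarity A tokens are averaged into their match.
    """
    a, b = tokens[0::2], tokens[1::2]
    an = a / np.linalg.norm(a, axis=1, keepdims=True)
    bn = b / np.linalg.norm(b, axis=1, keepdims=True)
    sim = an @ bn.T                        # (|A|, |B|) cosine similarities
    best = sim.argmax(axis=1)              # best B match for each A token
    score = sim.max(axis=1)                # similarity of that match
    order = np.argsort(-score)             # most similar A tokens first
    merge_idx, keep_idx = order[:r], order[r:]
    b = b.copy()                           # avoid mutating the input view
    for i in merge_idx:
        b[best[i]] = (b[best[i]] + a[i]) / 2.0
    return np.concatenate([b, a[keep_idx]], axis=0)

# Six tokens of dimension four, merged down to four tokens.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(6, 4))
reduced = bipartite_merge(tokens, r=2)     # shape (4, 4)
```

Applied inside each transformer block, this kind of reduction shrinks the sequence length for all subsequent layers, which is where the latency and memory savings come from.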
Problem

Research questions and friction points this paper is trying to address.

High inference latency inherited from large backbone models in parameter-efficient tuning
Extra computational overhead introduced by additional PET modules (e.g., adapters)
Preserving storage efficiency while improving token processing speed
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token redundancy reduction for faster inference
Plug-and-play module for PET efficiency
Fully differentiable token merging optimized via a straight-through estimator