GRASP LoRA: GRPO Guided Adapter Sparsity Policy for Cross Lingual Transfer

📅 2026-01-10
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of conventional adapter pruning, which relies on fixed global sparsity ratios and requires extensive grid search over large development sets, resulting in high computational costs and poor generalizability. To overcome this, the authors propose modeling sparsity as a learnable variable and introduce, for the first time, a reinforcement learning controller based on Group Relative Policy Optimization (GRPO) to dynamically optimize pruning rates during training, enabling efficient and adaptive cross-lingual transfer. By integrating LoRA adapter fusion with a lightweight development-set probing technique, the method substantially reduces reliance on large validation sets. Experiments on Arabic and Chinese benchmarks—XL-Sum and MLQA—demonstrate consistent improvements in semantic faithfulness, content coverage, and answer quality, while achieving several-fold reductions in runtime compared to traditional grid search approaches.
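The adapter-fusion-plus-pruning step mentioned above can be sketched in a few lines. Everything here is an illustrative assumption rather than the paper's implementation: the equal-weight averaging, the tensor shapes, and the global magnitude-pruning criterion are all hypothetical choices for demonstration.

```python
import numpy as np

# Hypothetical sketch: merge two LoRA weight deltas, then apply a global
# magnitude prune at a given ratio. Names, shapes, and the averaging
# weight `alpha` are illustrative assumptions, not the paper's method.

def merge_and_prune(delta_src, delta_tgt, prune_ratio, alpha=0.5):
    """Blend source/target LoRA deltas, then zero the smallest-magnitude
    fraction of entries given by prune_ratio."""
    merged = alpha * delta_src + (1 - alpha) * delta_tgt
    k = int(prune_ratio * merged.size)
    if k > 0:
        # k-th smallest absolute value over the flattened tensor
        threshold = np.sort(np.abs(merged), axis=None)[k - 1]
        merged = np.where(np.abs(merged) <= threshold, 0.0, merged)
    return merged

rng = np.random.default_rng(0)
d_src = rng.normal(size=(16, 16))
d_tgt = rng.normal(size=(16, 16))
pruned = merge_and_prune(d_src, d_tgt, prune_ratio=0.7)
sparsity = float((pruned == 0).mean())
print(round(sparsity, 3))  # ≈ 0.7
```

In this toy version the prune ratio is a fixed argument; the point of the paper is precisely that this scalar is learned online by a controller instead of being fixed by grid search.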

📝 Abstract
Parameter-efficient fine-tuning is a way to adapt LLMs to new languages when compute or data are limited, yet adapter pipelines usually choose a global prune ratio by grid search. This practice is computationally expensive and development-set intensive, since it repeats training, freezes sparsity, and misses fractional optima. We introduce GRASP LoRA (GRPO Guided Adapter Sparsity Policy), which treats global sparsity as a learnable control variable. A GRPO controller interleaves with training, periodically probing candidate prune ratios on a small micro development set and updating a single global prune ratio online from its reward signal. It operates on merged source and target LoRA adapters over a frozen backbone and replaces grid search with one controller run that learns a prune ratio, followed by a single final merge-and-prune fine-tuning run with pruning fixed at that ratio. On cross-lingual transfer from English into Arabic and Chinese, including XL-Sum summarization and MLQA extractive question answering with Llama 3 8B, GRASP LoRA improves semantic faithfulness, content coverage, and answer quality over strong target-only and merge-and-prune baselines. It reduces end-to-end runtime several-fold relative to grid search, lowers reliance on large development sets, and makes adapter reuse practical for low-resource deployment.
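The controller loop described in the abstract can be caricatured as a group-relative policy-gradient update on a single scalar. The sketch below is built entirely on assumptions: a Gaussian policy over the prune ratio, a toy quadratic reward standing in for micro-development-set probing, and made-up hyperparameters. It shows the GRPO-style mechanics (sample a group of candidates, standardize rewards within the group, update the policy), not the paper's actual controller.

```python
import random

# Illustrative GRPO-style controller over one global prune ratio.
# The Gaussian policy, the toy reward, and all hyperparameters are
# assumptions for demonstration, not the paper's implementation.

class PruneRatioController:
    def __init__(self, mu=0.5, sigma=0.1, lr=0.01, group_size=16):
        self.mu = mu              # mean of the Gaussian policy over the ratio
        self.sigma = sigma        # fixed exploration noise
        self.lr = lr
        self.group_size = group_size

    def propose(self):
        # Sample a group of candidate prune ratios, clipped to [0, 1].
        return [min(1.0, max(0.0, random.gauss(self.mu, self.sigma)))
                for _ in range(self.group_size)]

    def update(self, ratios, rewards):
        # Group-relative advantages: standardize rewards within the group.
        n = len(rewards)
        mean_r = sum(rewards) / n
        std_r = (sum((r - mean_r) ** 2 for r in rewards) / n) ** 0.5 or 1.0
        adv = [(r - mean_r) / std_r for r in rewards]
        # REINFORCE step on the Gaussian mean:
        # d/dmu log N(x; mu, sigma) = (x - mu) / sigma**2
        grad = sum(a * (x - self.mu) / self.sigma ** 2
                   for a, x in zip(adv, ratios)) / n
        self.mu = min(1.0, max(0.0, self.mu + self.lr * grad))

def probe_reward(ratio):
    # Stand-in for evaluating a candidate ratio on a micro dev set;
    # a toy reward whose optimum sits at ratio 0.3 (purely illustrative).
    return -(ratio - 0.3) ** 2

random.seed(0)
ctrl = PruneRatioController()
for _ in range(300):
    group = ctrl.propose()
    ctrl.update(group, [probe_reward(x) for x in group])
print(round(ctrl.mu, 2))  # learned ratio, expected to drift near the toy optimum 0.3
```

Because rewards are standardized within each group, no learned value baseline is needed, which is what keeps a GRPO-style controller lightweight enough to interleave with training.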
Problem

Research questions and friction points this paper is trying to address.

cross-lingual transfer
parameter-efficient fine-tuning
adapter sparsity
pruning ratio
low-resource deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

GRASP LoRA
parameter-efficient fine-tuning
cross-lingual transfer
adapter sparsity
GRPO
Besher Hassan
Mohamed bin Zayed University of Artificial Intelligence
Xiuying Chen
MBZUAI
Trustworthy NLP · Human-Centered NLP · Computational Social Science