🤖 AI Summary
Reinforcement learning (RL)-based fine-tuning of large language models (LLMs) is widely assumed to require updating all parameters for effective downstream performance and human value alignment.
Method: This work systematically investigates parameter update sparsity in RL fine-tuning across 10 LLM architectures and 7 RL algorithms—including PPO, GRPO, and DPO—using rank analysis, subnet overlap measurement, and KL-divergence constraint ablation.
Contribution/Results: We discover that updating only 5–30% of parameters suffices to fully reproduce full-parameter fine-tuning performance and alignment quality. The identified sparse subnetworks exhibit high consistency across algorithms, datasets, and random seeds; span all layers yet retain extreme update sparsity; and achieve near-full-rank parameter updates—indicating preserved representational capacity. This is the first empirical demonstration of intrinsic sparsity in RL-based LLM alignment, challenging the necessity of full-parameter optimization and establishing a new paradigm for efficient, interpretable, and resource-conscious LLM alignment.
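As a rough illustration of how one might measure this kind of update sparsity, the sketch below compares a base checkpoint with an RL-fine-tuned counterpart and reports the fraction of parameters that actually changed. The model names, zero-update tolerance, and loading details are placeholder assumptions for demonstration, not the paper's exact setup.

```python
# Minimal sketch: estimate RL-induced parameter update sparsity by diffing a base
# model against its RL-fine-tuned counterpart. Checkpoint names and the tolerance
# below are illustrative placeholders, not values from the paper.
import torch
from transformers import AutoModelForCausalLM

BASE_CKPT = "path/to/base-model"          # assumed base checkpoint (placeholder)
TUNED_CKPT = "path/to/rl-finetuned-model"  # assumed RL-fine-tuned checkpoint (placeholder)
TOL = 0.0                                  # weights equal within TOL count as "not updated"

base = AutoModelForCausalLM.from_pretrained(BASE_CKPT, torch_dtype=torch.float32)
tuned = AutoModelForCausalLM.from_pretrained(TUNED_CKPT, torch_dtype=torch.float32)

updated, total = 0, 0
for (name, p_base), (_, p_tuned) in zip(base.named_parameters(), tuned.named_parameters()):
    diff = (p_tuned.detach() - p_base.detach()).abs()
    updated += (diff > TOL).sum().item()
    total += diff.numel()

print(f"fraction of parameters updated: {updated / total:.4f}")
```

The same loop can be run per parameter matrix to check whether the sparsity is spread across all layers rather than concentrated in a few, which is one of the findings summarized above.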
📝 Abstract
Reinforcement learning (RL) yields substantial improvements in the downstream task performance and human-value alignment of large language models (LLMs). Surprisingly, such large gains result from updating only a small subnetwork comprising just 5 to 30 percent of the parameters, with the rest left effectively unchanged. We refer to this phenomenon as parameter update sparsity induced by RL. It is observed across all 7 widely used RL algorithms (e.g., PPO, GRPO, DPO) and all 10 LLMs from different families in our experiments. This sparsity is intrinsic and occurs without any explicit sparsity-promoting regularization or architectural constraints. Finetuning the subnetwork alone recovers the test accuracy and, remarkably, produces a model nearly identical to the one obtained via full finetuning. The subnetworks found under different random seeds, training data, and even RL algorithms show substantially greater overlap than expected by chance. Our analysis suggests that this sparsity is not due to updating only a subset of layers; instead, nearly all parameter matrices receive similarly sparse updates. Moreover, the updates to almost all parameter matrices are nearly full-rank, suggesting that RL updates a small subset of parameters that nevertheless spans almost the full subspace each parameter matrix can represent. We conjecture that this update sparsity is primarily attributable to training on data that is near the policy distribution; techniques that encourage the policy to remain close to the pretrained model, such as KL regularization and gradient clipping, have limited impact.
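The abstract describes two further measurements beyond raw sparsity: the rank of each update matrix and the overlap between the updated subnetworks of independent runs. The sketch below illustrates both on toy tensors; the 10% update density, matrix shape, and overlap construction are made-up assumptions for demonstration only, not the paper's methodology or results.

```python
# Minimal sketch of two analyses referenced above, on toy data:
# (1) effective rank of an update matrix Delta = W_tuned - W_base, and
# (2) overlap between the updated-parameter masks of two independent runs.
import torch

def update_rank_fraction(w_base: torch.Tensor, w_tuned: torch.Tensor) -> float:
    """Rank of the update, relative to the maximum possible rank of the matrix."""
    delta = (w_tuned - w_base).float()
    rank = torch.linalg.matrix_rank(delta).item()
    return rank / min(delta.shape)

def subnet_overlap(mask_a: torch.Tensor, mask_b: torch.Tensor) -> float:
    """Fraction of run A's updated parameters that are also updated in run B."""
    inter = (mask_a & mask_b).sum().item()
    return inter / max(mask_a.sum().item(), 1)

# Toy usage (replace with real per-matrix weights from two RL runs):
w0 = torch.randn(256, 256)
mask_a = torch.rand_like(w0) < 0.10             # ~10% of entries updated in run A
mask_b = mask_a ^ (torch.rand_like(w0) < 0.02)  # run B mostly shares run A's subnetwork
w_a = torch.where(mask_a, w0 + 0.01 * torch.randn_like(w0), w0)
w_b = torch.where(mask_b, w0 + 0.01 * torch.randn_like(w0), w0)

# A sparse but spread-out update is typically close to full rank, mirroring the
# "sparse yet nearly full-rank" observation; overlap here is high by construction.
print("update rank fraction (run A):", update_rank_fraction(w0, w_a))
print("subnetwork overlap(A, B):    ", subnet_overlap(w_a != w0, w_b != w0))
```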