Reinforcement Learning Finetunes Small Subnetworks in Large Language Models

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Reinforcement learning (RL)-based fine-tuning of large language models (LLMs) is widely assumed to require updating all parameters for effective downstream performance and human value alignment. Method: This work systematically investigates parameter update sparsity in RL fine-tuning across 10 LLM architectures and 7 RL algorithms—including PPO, GRPO, and DPO—using rank analysis, subnetwork overlap measurement, and KL-divergence constraint ablation. Contribution/Results: We discover that updating only 5–30% of parameters suffices to fully reproduce full-parameter fine-tuning performance and alignment quality. The identified sparse subnetworks exhibit high consistency across algorithms, datasets, and random seeds; span all layers yet retain extreme update sparsity; and achieve near-full-rank parameter updates—indicating preserved representational capacity. This is the first empirical demonstration of intrinsic sparsity in RL-based LLM alignment, challenging the necessity of full-parameter optimization and establishing a new paradigm for efficient, interpretable, and resource-conscious LLM alignment.
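The headline measurement, the fraction of parameters left effectively unchanged after RL fine-tuning, can be sketched as a comparison of two checkpoints. This is an illustrative sketch, not the authors' code; the dicts `before`/`after` and the toy `layer0` matrix are hypothetical stand-ins for pretrained and RL-finetuned state dicts:

```python
import numpy as np

def update_sparsity(before, after, tol=0.0):
    """Fraction of parameters left (nearly) unchanged by finetuning.

    `before` and `after` map parameter names to same-shaped arrays.
    tol=0.0 counts only exactly identical entries, matching the strict
    notion of "effectively unchanged" parameters.
    """
    unchanged = total = 0
    for name, w0 in before.items():
        w1 = after[name]
        unchanged += int(np.sum(np.abs(w1 - w0) <= tol))
        total += w0.size
    return unchanged / total

# Toy example: a "finetuning" run that touches only 2 of 8 entries.
rng = np.random.default_rng(0)
w = {"layer0": rng.normal(size=(2, 4))}
w_ft = {"layer0": w["layer0"].copy()}
w_ft["layer0"][0, 0] += 0.1
w_ft["layer0"][1, 3] -= 0.2
print(update_sparsity(w, w_ft))  # 0.75 -> 75% of parameters untouched
```

In the paper's setting the same computation runs over every parameter matrix of a full LLM checkpoint pair, yielding the reported 70–95% of unchanged parameters.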

📝 Abstract
Reinforcement learning (RL) yields substantial improvements in the downstream task performance of large language models (LLMs) and their alignment with human values. Surprisingly, such large gains result from updating only a small subnetwork comprising just 5 percent to 30 percent of the parameters, with the rest effectively unchanged. We refer to this phenomenon as parameter update sparsity induced by RL. It is observed across all 7 widely used RL algorithms (e.g., PPO, GRPO, DPO) and all 10 LLMs from different families in our experiments. This sparsity is intrinsic and occurs without any explicit sparsity-promoting regularization or architectural constraints. Finetuning the subnetwork alone recovers the test accuracy and, remarkably, produces a model nearly identical to the one obtained via full finetuning. The subnetworks from different random seeds, training data, and even RL algorithms show substantially greater overlap than expected by chance. Our analysis suggests that this sparsity is not due to updating only a subset of layers; instead, nearly all parameter matrices receive similarly sparse updates. Moreover, the updates to almost all parameter matrices are nearly full-rank, suggesting RL updates a small subset of parameters that nevertheless spans almost the full subspaces that the parameter matrices can represent. We conjecture that this update sparsity can be primarily attributed to training on data that is near the policy distribution, while techniques that encourage the policy to remain close to the pretrained model, such as KL regularization and gradient clipping, have limited impact.
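The abstract's two key analyses, near-full-rank sparse updates and above-chance subnetwork overlap, can be illustrated numerically. This is a toy sketch with random matrices, not the authors' code; `d`, `density`, and the masks are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64
density = 0.3  # upper end of the 5-30% update sparsity range

# A sparse update: ~30% of entries change, yet the update matrix is
# still (nearly) full-rank, so it spans almost the whole subspace the
# parameter matrix can represent.
mask = rng.random((d, d)) < density       # the "subnetwork" for this matrix
delta = np.where(mask, rng.normal(size=(d, d)), 0.0)
rank = int(np.linalg.matrix_rank(delta))
print(rank, "of", d)  # typically near d despite the sparsity

# Overlap of two independently drawn subnetwork masks, against the
# overlap expected by chance; the paper reports overlaps well above
# this independence baseline.
mask2 = rng.random((d, d)) < density
observed = np.logical_and(mask, mask2).mean()
expected = mask.mean() * mask2.mean()     # chance baseline
print(f"observed overlap {observed:.3f} vs chance {expected:.3f}")
```

Random independent masks land at the chance baseline; the paper's finding is that masks from different seeds, datasets, and algorithms overlap substantially more than that.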
Problem

Research questions and friction points this paper is trying to address.

Does RL finetuning of LLMs really require updating all parameters to achieve its performance gains?
Why does parameter update sparsity emerge without any explicit sparsity-promoting regularization or architectural constraints?
Can sparse updates nevertheless preserve representational capacity by spanning the full subspaces of the parameter matrices?
Innovation

Methods, ideas, or system contributions that make the work stand out.

First systematic demonstration that RL finetunes only small subnetworks (5–30% of parameters) across 7 RL algorithms and 10 LLMs
Shows the update sparsity is intrinsic, arising without explicit sparsity-promoting regularization or constraints
Shows the sparse updates are nearly full-rank, spanning almost the full subspaces of the parameter matrices
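The claim that finetuning the subnetwork alone recovers full-finetuning performance amounts to freezing all parameters outside a mask. A minimal sketch of such a masked update step, assuming a plain SGD step for illustration (the function name and toy arrays are hypothetical, not the paper's training code):

```python
import numpy as np

def masked_sgd_step(w, grad, mask, lr=0.01):
    """One SGD step that updates only the parameters inside the subnetwork.

    `mask` is a boolean array with the same shape as `w`; entries outside
    the mask are frozen, mimicking "finetuning the subnetwork alone".
    """
    return w - lr * np.where(mask, grad, 0.0)

w = np.ones(4)
grad = np.array([1.0, 2.0, 3.0, 4.0])
mask = np.array([True, False, True, False])
print(masked_sgd_step(w, grad, mask))  # only entries 0 and 2 move
```

In practice the same effect is obtained by zeroing gradients (or setting `requires_grad=False`) for parameters outside the identified subnetwork before the optimizer step.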