Adaptive Layerwise Perturbation: Unifying Off-Policy Corrections for LLM RL

๐Ÿ“… 2026-03-19
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work addresses distributional shift, heavy-tailed importance ratios, and training instability in reinforcement learning with large language models, issues that stem from stale policies and the mismatch between training and inference. To mitigate them, the authors propose an adaptive layerwise perturbation mechanism that injects small learnable perturbations into the hidden states of each layer, thereby bringing the inference policy into the family of update policies and achieving representation-level distribution alignment. By applying layer-wise perturbations, correcting importance ratios, and regulating noise injection, the method suppresses abrupt KL-divergence spikes and tail inflation in the importance weights, improving both exploration and training stability. Experiments show that full-layer perturbation substantially outperforms partial-layer or logits-only perturbation, yielding significant gains on both single-turn mathematical reasoning and multi-turn tool-augmented tasks.

๐Ÿ“ Abstract
Off-policy problems such as policy staleness and training-inference mismatch have become a major bottleneck for training stability and further exploration in LLM RL. As inference is made more efficient, the distribution gap between the inference policy and the updated policy grows, leading to heavy-tailed importance ratios. Heavy-tailed ratios arise where the policy is locally sharp, which further inflates gradients and can push updates outside the trust region. To address this, we propose Adaptive Layerwise Perturbation (ALP), which injects small learnable perturbations into the input hidden states of each layer during updates; the perturbed policy serves as the numerator of the importance ratio against the unchanged inference policy in the objective. Intuitively, by adding controlled noise to intermediate representations, ALP prevents the updated policy from deviating too sharply from the inference policy and enlarges the policy family to cover the inference policy with its mismatch noise. The flattened distribution thus naturally tightens the gap between the updated and inference policies and reduces the tail of the importance ratios, maintaining training stability. This is further validated empirically: experiments on single-turn math and multi-turn tool-integrated reasoning tasks show that ALP not only improves final performance but also avoids blow-ups of the importance-ratio tail and KL spikes during iterative training, along with boosted exploration. Ablations show that representation-level perturbations across all layers are most effective, substantially outperforming partial-layer and logits-only variants.
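The core idea in the abstract can be sketched numerically. The toy model below is a hypothetical stand-in for a transformer (a stack of tanh layers over NumPy arrays, not the authors' architecture or code): a small additive perturbation is injected into each layer's input hidden state during the update-side forward pass, and the resulting token probability forms the numerator of the importance ratio against the unperturbed inference-side pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy network: hidden size D, L layers, vocab size V.
D, L, V = 8, 3, 5
Ws = [rng.normal(scale=0.3, size=(D, D)) for _ in range(L)]
W_out = rng.normal(scale=0.3, size=(D, V))
# Learnable per-layer perturbations (initialized to zero, as a sketch of
# ALP's layerwise noise; the real method learns these during updates).
deltas = [np.zeros(D) for _ in range(L)]

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def forward(h, perturb):
    # Inject the perturbation into each layer's *input* hidden state.
    for W, d in zip(Ws, deltas):
        if perturb:
            h = h + d
        h = np.tanh(h @ W)
    return softmax(h @ W_out)

h0 = rng.normal(size=D)   # toy input hidden state
token = 2                 # sampled token index

# Inference policy: unperturbed pass, frozen in the denominator.
p_infer = forward(h0, perturb=False)

# Stand-in for one update step on the perturbations (random here,
# gradient-based in the actual method).
for d in deltas:
    d += rng.normal(scale=0.01, size=D)

# Perturbed update-side pass supplies the numerator.
p_update = forward(h0, perturb=True)
ratio = p_update[token] / p_infer[token]
```

Because the perturbations are small and applied at every layer, the update-side distribution stays close to the inference-side one, so `ratio` stays near 1 instead of developing the heavy tail the abstract describes.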
Problem

Research questions and friction points this paper is trying to address.

off-policy
policy staleness
training-inference mismatch
heavy-tailed importance ratios
LLM RL
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Layerwise Perturbation
Off-Policy Correction
Importance Ratio
LLM Reinforcement Learning
Representation Perturbation
๐Ÿ”Ž Similar Papers
No similar papers found.