🤖 AI Summary
To address the challenge of balancing insufficient exploration and pretraining capability degradation in reinforcement learning fine-tuning of large language models (LLMs), this paper proposes a dynamic KL-penalty mechanism based on “critical token” identification. The core innovation lies in the first formal definition and detection of tokens that decisively influence the final output; KL regularization is selectively relaxed only at these positions, enabling synergistic optimization of exploration efficiency and pretraining capability preservation. Built upon the PPO framework, our method integrates token-level importance estimation with adaptive KL-weight modulation. Evaluated on arithmetic reasoning tasks, it achieves a 42% acceleration in policy convergence and an 18.3% improvement in task accuracy, without compromising language modeling performance. This mechanism establishes a novel paradigm for efficient and stable RLHF in LLMs.
📝 Abstract
The ability to achieve long-term goals is a key challenge in the current development of large language models (LLMs). To address this, pre-trained LLMs can be fine-tuned with reinforcement learning (RL) to explore solutions that optimize a given goal. However, exploration with LLMs is difficult, as a balance has to be struck between discovering new solutions and staying close enough to the pre-trained model, so as not to degrade basic capabilities. This is typically controlled with a Kullback-Leibler (KL) penalty. In this paper, we investigate the exploration dynamics of a small language model on a simple arithmetic task. We show how varying degrees of pre-training influence exploration and demonstrate the importance of"critical tokens"which have a dramatic impact on the final outcome. Consequently, we introduce a simple modification to the KL penalty that favors exploration on critical tokens, increasing the efficiency of the RL fine-tuning stage.