🤖 AI Summary
This work addresses the tendency of reinforcement learning (RL) to reduce output diversity in open-ended tasks such as creative writing, despite its effectiveness in enhancing large language model performance. To mitigate this limitation, the authors propose a framework built on semi-structured long chains of thought that decompose text generation into explicit planning steps. The approach combines a Diverse Planning Branching strategy, which injects divergence at the planning stage, with a group-aware diversity reward that explicitly guides exploration during RL training. Evaluated on multiple creative writing benchmarks, the method consistently outperforms existing baselines, significantly improving output diversity while maintaining high generation quality and thereby avoiding the conventional RL trade-off of performance at the expense of diversity.
📝 Abstract
Reinforcement learning (RL)-based enhancement of large language models (LLMs) often leads to reduced output diversity, undermining their utility in open-ended tasks like creative writing. Current methods lack explicit mechanisms for guiding diverse exploration and instead prioritize optimization efficiency and performance over diversity. This paper proposes an RL framework structured around a semi-structured long Chain-of-Thought (CoT), in which the generation process is decomposed into explicitly planned intermediate steps. We introduce a Diverse Planning Branching method that strategically introduces divergence at the planning phase based on diversity variation, alongside a group-aware diversity reward to encourage distinct trajectories. Experimental results on creative writing benchmarks demonstrate that our approach significantly improves output diversity without compromising generation quality, consistently outperforming existing baselines.
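The abstract does not specify how the group-aware diversity reward is computed. A minimal sketch of the general idea, under assumptions not taken from the paper: within each sampled group of trajectories (as in group-based RL methods such as GRPO), each sample earns a bonus proportional to its mean dissimilarity from the other group members, here measured with a simple Jaccard distance over tokens as a stand-in for whatever similarity measure the authors actually use; the weight `lam` is likewise hypothetical.

```python
from itertools import combinations

def jaccard_distance(a: str, b: str) -> float:
    """Lexical dissimilarity between two texts: 1 - Jaccard over token sets.

    A stand-in for a learned embedding similarity; the paper's metric is unspecified.
    """
    ta, tb = set(a.split()), set(b.split())
    if not ta and not tb:
        return 0.0
    return 1.0 - len(ta & tb) / len(ta | tb)

def group_diversity_bonus(group: list[str]) -> list[float]:
    """For each trajectory, its mean distance to the rest of its sampled group."""
    n = len(group)
    if n < 2:
        return [0.0] * n
    bonus = [0.0] * n
    for i, j in combinations(range(n), 2):
        d = jaccard_distance(group[i], group[j])
        bonus[i] += d
        bonus[j] += d
    return [b / (n - 1) for b in bonus]

def shaped_rewards(quality: list[float], group: list[str], lam: float = 0.2) -> list[float]:
    """Add the diversity bonus to a per-sample quality reward (lam is a guess)."""
    return [q + lam * d for q, d in zip(quality, group_diversity_bonus(group))]
```

Two identical samples in a group receive no bonus from each other, so the shaped reward pushes the policy toward distinct trajectories without altering the quality term.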