🤖 AI Summary
To address critical challenges in ultra-long text generation by large language models (LLMs)—including inherent length limitations, quality degradation over long outputs, and overreliance on low-quality synthetic data—this paper proposes a fully unsupervised reinforcement learning framework. It eliminates the need for human annotations or synthetic data, instead employing a fine-grained reward mechanism that jointly optimizes length control, coherence, structural soundness, and lexical diversity to guide autonomous writing planning. By adopting an incentivization-based paradigm in place of conventional supervised fine-tuning (SFT), the framework enables ultra-long text generation capability to emerge spontaneously. Trained from Qwen2.5-32B using an R1-Zero–inspired pipeline and custom reward models, the method achieves state-of-the-art performance on WritingBench and Arena-Write, significantly outperforming SFT baselines and even surpassing 100B+ models such as DeepSeek R1 and Qwen3-235B.
📝 Abstract
Ultra-long generation by large language models (LLMs) is a widely demanded scenario, yet it remains a significant challenge due to their maximum generation length limit and overall quality degradation as sequence length increases. Previous approaches, exemplified by LongWriter, typically rely on "teaching", which involves supervised fine-tuning (SFT) on synthetic long-form outputs. However, this strategy heavily depends on synthetic SFT data, which is difficult and costly to construct, often lacks coherence and consistency, and tends to be overly artificial and structurally monotonous. In this work, we propose an incentivization-based approach that, starting entirely from scratch and without relying on any annotated or synthetic data, leverages reinforcement learning (RL) to foster the emergence of ultra-long, high-quality text generation capabilities in LLMs. We perform RL training starting from a base model, similar to R1-Zero, guiding it to engage in reasoning that facilitates planning and refinement during the writing process. To support this, we employ specialized reward models that steer the LLM towards improved length control, writing quality, and structural formatting. Experimental evaluations show that our LongWriter-Zero model, trained from Qwen2.5-32B, consistently outperforms traditional SFT methods on long-form writing tasks, achieving state-of-the-art results across all metrics on WritingBench and Arena-Write, and even surpassing 100B+ models such as DeepSeek R1 and Qwen3-235B. We open-source our data and model checkpoints at https://huggingface.co/THU-KEG/LongWriter-Zero-32B.
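To make the multi-aspect reward design concrete, here is a minimal sketch of how per-aspect scores (length control, writing quality, structural formatting) might be combined into a single scalar reward for RL training. All function names, the linear length-penalty shape, and the weights are illustrative assumptions for exposition, not the paper's actual reward implementation.

```python
def length_reward(n_tokens: int, target: int, tolerance: float = 0.2) -> float:
    """Score closeness to a target length: 1.0 at the target, decaying
    linearly to 0.0 once relative deviation exceeds `tolerance`."""
    deviation = abs(n_tokens - target) / target
    return max(0.0, 1.0 - deviation / tolerance)

def composite_reward(length_score: float, quality_score: float,
                     format_score: float,
                     weights: tuple = (0.3, 0.5, 0.2)) -> float:
    """Weighted sum of per-aspect rewards, each assumed to lie in [0, 1].
    In practice the quality and format scores would come from the
    specialized reward models mentioned in the abstract."""
    w_len, w_qual, w_fmt = weights
    return w_len * length_score + w_qual * quality_score + w_fmt * format_score

# Example: a 9,500-token draft against a 10,000-token target.
r_len = length_reward(9_500, 10_000)   # 5% deviation vs. 20% tolerance -> 0.75
r = composite_reward(r_len, quality_score=0.8, format_score=0.9)
```

A scalar reward of this form is what a policy-gradient method (as in R1-Zero-style training) would maximize; the relative weights control the trade-off between hitting the requested length and the judged quality of the writing.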