AI Summary
This work addresses the absence of evaluation benchmarks specifically designed for reward models in long-form text generation. To this end, the authors introduce the first comprehensive benchmark tailored to long-form text, encompassing five core tasks: question answering, retrieval-augmented generation, dialogue, writing, and reasoning. They further propose a "needle-in-a-haystack" test to systematically analyze how error position and response length affect reward model performance. Using a multi-stage pipeline for collecting instruction and preference data, the study evaluates over twenty representative classification- and generation-based reward models. The results reveal that current models generally struggle to capture long-text semantics, while classification-based approaches demonstrate superior generalization under identical training conditions. This work establishes a fine-grained analytical framework and a reliable benchmark for advancing reward modeling in long-form generation.
Abstract
The widespread adoption of reinforcement learning-based alignment highlights the growing importance of reward models. Benchmarks have been built to evaluate reward models across many domains and scenarios. However, a significant gap remains in assessing reward models for long-form generation, despite its critical role in real-world applications. To bridge this gap, we introduce Long-form RewardBench, the first reward modeling testbed specifically designed for long-form generation. Our benchmark encompasses five key subtasks: QA, RAG, Chat, Writing, and Reasoning. We collected instruction and preference data through a carefully designed multi-stage data collection process, and conducted extensive experiments on more than 20 mainstream reward models, including both classifier and generative models. Our findings reveal that current models still lack long-form reward modeling capabilities. Furthermore, we designed a novel Long-form Needle-in-a-Haystack Test, which revealed a correlation between reward modeling performance and both the position of an error within a response and the overall response length, with distinct characteristics observed between classifier and generative models. Finally, we demonstrate that classifiers exhibit better generalizability than generative models trained on the same data. As the first benchmark for long-form reward modeling, this work aims to offer a robust platform for tracking progress in this crucial area.
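To make the Needle-in-a-Haystack idea concrete, here is a minimal sketch of how such a probe could be structured: an erroneous "needle" is inserted at controlled relative positions in a long response, and a reward model's score is recorded at each position to measure position sensitivity. This is an illustrative assumption, not the paper's actual pipeline; `insert_error`, `probe_positions`, and the toy `score_fn` are hypothetical names, and a real study would plug in an actual reward model in place of `score_fn`.

```python
def insert_error(response: str, error: str, position: float) -> str:
    """Insert an erroneous sentence at a relative position in [0, 1].

    The response is split into sentences and the error is placed at the
    sentence boundary closest to the requested relative position.
    """
    sentences = response.split(". ")
    idx = round(position * len(sentences))
    return ". ".join(sentences[:idx] + [error] + sentences[idx:])


def probe_positions(response, error, score_fn,
                    positions=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Score the corrupted response for each error position.

    A position-robust reward model should assign similarly low scores
    regardless of where the error sits; large variation across positions
    indicates the position bias the test is designed to expose.
    """
    return {p: score_fn(insert_error(response, error, p)) for p in positions}
```

For example, running `probe_positions` with a clean multi-sentence response, a known-wrong sentence as the needle, and a reward model's scoring function would yield a score per insertion position, which can then be compared across response lengths.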