Long-form RewardBench: Evaluating Reward Models for Long-form Generation

📅 2026-03-13
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the absence of evaluation benchmarks specifically designed for reward models in long-text generation. To this end, the authors introduce the first comprehensive benchmark platform tailored to long-form text, encompassing five core tasks: question answering, retrieval-augmented generation, dialogue, writing, and reasoning. They further propose a "needle-in-a-haystack" test to systematically analyze how error location and response length affect reward model performance. Leveraging multi-stage instruction tuning and preference data collection, the study evaluates over twenty representative classification- and generation-based reward models. Results reveal that current models generally struggle to effectively capture long-text semantics, with classification-based approaches demonstrating superior generalization under identical training conditions. This work establishes a fine-grained analytical framework and a reliable benchmark for advancing reward modeling in long-text generation.

๐Ÿ“ Abstract
The widespread adoption of reinforcement learning-based alignment highlights the growing importance of reward models. Various benchmarks have been built to evaluate reward models in various domains and scenarios. However, a significant gap remains in assessing reward models for long-form generation, despite its critical role in real-world applications. To bridge this, we introduce Long-form RewardBench, the first reward modeling testbed specifically designed for long-form generation. Our benchmark encompasses five key subtasks: QA, RAG, Chat, Writing, and Reasoning. We collected instruction and preference data through a meticulously designed multi-stage data collection process, and conducted extensive experiments on 20+ mainstream reward models, including both classifiers and generative models. Our findings reveal that current models still lack long-form reward modeling capabilities. Furthermore, we designed a novel Long-form Needle-in-a-Haystack Test, which revealed a correlation between reward modeling performance and the error's position within a response, as well as the overall response length, with distinct characteristics observed between classification and generative models. Finally, we demonstrate that classifiers exhibit better generalizability compared to generative models trained on the same data. As the first benchmark for long-form reward modeling, this work aims to offer a robust platform for visualizing progress in this crucial area.
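The Long-form Needle-in-a-Haystack Test described above perturbs a long response by injecting an error at varying positions and checks whether a reward model still ranks the clean response higher. The sketch below illustrates the general idea only; `stub_reward` is a hypothetical placeholder, not the paper's reward model, and the helper names are assumptions.

```python
# Sketch of a "needle-in-a-haystack" style probe for reward models:
# inject an error sentence at different relative positions in a long
# response and record whether a scorer still prefers the clean version.

def inject_error(response: str, error: str, position: float) -> str:
    """Insert an error sentence at a relative position in [0.0, 1.0]."""
    sentences = response.split(". ")
    idx = min(int(position * len(sentences)), len(sentences) - 1)
    return ". ".join(sentences[:idx] + [error] + sentences[idx:])

def stub_reward(text: str) -> float:
    """Placeholder scorer that penalizes the injected marker.
    A real test would query an actual reward model here."""
    return -1.0 if "INJECTED ERROR" in text else 1.0

def position_sweep(response: str, error: str, positions: list) -> dict:
    """Pairwise check per position: does the scorer rank the clean
    response above the corrupted one?"""
    clean_score = stub_reward(response)
    return {p: clean_score > stub_reward(inject_error(response, error, p))
            for p in positions}

# Build a synthetic long response and sweep error positions.
long_response = ". ".join(f"Sentence {i} of a long answer" for i in range(50))
results = position_sweep(long_response, "INJECTED ERROR",
                         [0.0, 0.25, 0.5, 0.75, 1.0])
```

With a real reward model in place of the stub, plotting pairwise accuracy against `position` and against response length would reproduce the kind of analysis the benchmark reports.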
Problem

Research questions and friction points this paper is trying to address.

reward models
long-form generation
evaluation benchmark
reinforcement learning
alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Long-form Reward Modeling
Reward Benchmarks
Needle-in-a-Haystack Test
Preference Data
Generative vs. Classifier Reward Models
Hui Huang
Harbin Institute of Technology

Yancheng He
Alibaba Group

Wei Liu
Associate Professor, Harbin Institute of Technology, Shenzhen

Muyun Yang
Faculty of Computing, Harbin Institute of Technology, Harbin, China

Jiaheng Liu
School of Intelligence Science and Technology, Nanjing University, Suzhou, China

Kehai Chen
Harbin Institute of Technology (Shenzhen)

Bing Xu
Faculty of Computing, Harbin Institute of Technology, Harbin, China

Conghui Zhu
Faculty of Computing, Harbin Institute of Technology, Harbin, China

Hailong Cao
Harbin Institute of Technology

Tiejun Zhao
Faculty of Computing, Harbin Institute of Technology, Harbin, China