DR Tulu: Reinforcement Learning with Evolving Rubrics for Deep Research

📅 2025-11-24
🤖 AI Summary
Existing open deep research models predominantly rely on reinforcement learning with verifiable rewards (RLVR), which is effective for short-answer tasks but fails to scale to open-ended, long-form research requiring multi-step reasoning, evidence attribution, and long-range dependency modeling. Method: We propose Reinforcement Learning with Evolving Rubrics (RLER), a framework in which evaluation rubrics co-evolve with the policy model, enabling fine-grained, on-policy feedback and supporting end-to-end training of open-source long-form deep research models. Integrated with an MCP-based agent architecture, our model supports multi-step reasoning and evidence provenance. Contribution/Results: We release DR Tulu-8B, which substantially outperforms prior open models across four long-form research benchmarks in scientific, medical, and general domains, matching or exceeding closed-source systems while using fewer parameters and lower inference cost. All data, models, and code are fully open-sourced.

📝 Abstract
Deep research models perform multi-step research to produce long-form, well-attributed answers. However, most open deep research models are trained on easily verifiable short-form QA tasks via reinforcement learning with verifiable rewards (RLVR), which does not extend to realistic long-form tasks. We address this with Reinforcement Learning with Evolving Rubrics (RLER), in which we construct and maintain rubrics that co-evolve with the policy model during training; this allows the rubrics to incorporate information that the model has newly explored and to provide discriminative, on-policy feedback. Using RLER, we develop Deep Research Tulu (DR Tulu-8B), the first open model that is directly trained for open-ended, long-form deep research. Across four long-form deep research benchmarks in science, healthcare and general domains, DR Tulu substantially outperforms existing open deep research models, and matches or exceeds proprietary deep research systems, while being significantly smaller and cheaper per query. To facilitate future research, we release all data, models, and code, including our new MCP-based agent infrastructure for deep research systems.
Problem

Research questions and friction points this paper is trying to address.

Addressing limitations of RLVR training for long-form deep research tasks
Developing evolving rubrics that co-adapt with policy models during training
Creating the first open model specialized for open-ended, long-form research
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evolving rubrics co-train with policy model
On-policy feedback using newly explored information
MCP-based agent infrastructure for deep research
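The evolving-rubrics idea above can be caricatured in a few lines: score each rollout against the current rubric (on-policy feedback), then fold newly surfaced claims back into the rubric so it stays discriminative as the policy improves. This is a minimal sketch under stated assumptions; the names (`Rubric`, `rler_step`, `extract_claims`) are hypothetical, the substring-matching judge is a toy stand-in for the paper's LLM-based judging, and the actual RLER update rule is more sophisticated.

```python
# Hypothetical sketch of an RLER-style training step (illustrative only;
# the paper's actual algorithm and interfaces may differ).
from dataclasses import dataclass, field


@dataclass
class Rubric:
    criteria: list = field(default_factory=list)

    def score(self, answer: str) -> float:
        # Toy judge: fraction of rubric criteria satisfied by the answer.
        # In RLER this role would be played by an LLM judge.
        if not self.criteria:
            return 0.0
        hits = sum(1 for c in self.criteria if c in answer)
        return hits / len(self.criteria)


def extract_claims(answer: str) -> list:
    # Placeholder for an LLM-based claim extractor; here we just grab
    # capitalized tokens as stand-in "claims".
    return [w for w in answer.split() if w.istitle()]


def evolve_rubric(rubric: Rubric, rollouts: list) -> None:
    # Fold information the policy newly explored into the rubric, so the
    # rubric co-evolves with the policy (the "evolving" step).
    for answer in rollouts:
        for claim in extract_claims(answer):
            if claim not in rubric.criteria:
                rubric.criteria.append(claim)


def rler_step(rubric: Rubric, rollouts: list) -> list:
    rewards = [rubric.score(a) for a in rollouts]  # on-policy feedback
    evolve_rubric(rubric, rollouts)                # rubric update
    return rewards
```

The key design point sketched here is that rewards are computed against a rubric that changes during training, rather than against a fixed verifiable answer as in RLVR.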
Rulin Shao
University of Washington
machine learning
Akari Asai
Allen Institute for AI, Carnegie Mellon University
Natural Language Processing, Machine Learning, Information Retrieval
Shannon Zejiang Shen
Massachusetts Institute of Technology
Machine Intelligence, Human AI Collaboration
Hamish Ivison
University of Washington
Natural Language Processing
Varsha Kishore
Cornell University
Machine Learning
Jingming Zhuo
University of Washington
Xinran Zhao
Carnegie Mellon University
Molly Park
University of Washington
Samuel G. Finlayson
University of Washington
David Sontag
Professor, Massachusetts Institute of Technology
Machine Learning, Healthcare, Artificial Intelligence, Large Language Models, Approximate Inference
Tyler Murray
Allen Institute for AI
Sewon Min
UC Berkeley EECS & Allen Institute for AI
Natural Language Processing, Machine Learning
Pradeep Dasigi
Allen Institute for AI (Ai2)
Natural Language Processing, Machine Learning, Language Modeling
Luca Soldaini
Allen Institute for AI
Large Language Models, Open Source AI, Information Retrieval
Faeze Brahman
Research Scientist; Allen Institute for AI (Ai2)
Natural Language Processing, Machine Learning, AI Alignment, Human-Centered AI
Wen-tau Yih
University of Washington
Tongshuang Wu
Carnegie Mellon University
Luke Zettlemoyer
University of Washington; Meta
Natural Language Processing, Semantics, Machine Learning, Artificial Intelligence
Yoon Kim
Associate Professor, MIT
Machine Learning, Natural Language Processing, Deep Learning
Hannaneh Hajishirzi
University of Washington; Allen AI
NLP, Language Models, AI
Pang Wei Koh
University of Washington; Allen Institute for AI
Machine Learning, Natural Language Processing, Computational Biology