TÜLU 3: Pushing Frontiers in Open Language Model Post-Training

📅 2024-11-22
🏛️ arXiv.org
📈 Citations: 5
Influential: 1
🤖 AI Summary
To address critical gaps in open-language-model post-training, namely performance lagging behind proprietary counterparts, insufficient transparency, and poor reproducibility, this paper introduces Tulu 3, a family of fully open, state-of-the-art post-trained models built on Llama 3.1. Methodologically, it contributes (1) Reinforcement Learning with Verifiable Rewards (RLVR), a novel training method that replaces a learned reward model with ground-truth verification on tasks whose answers can be checked; (2) a multi-stage, decontaminated, standardized evaluation protocol incorporating both development-set and unseen held-out evaluations, alongside analysis of training methods that did not reliably improve performance; and (3) fully open-sourced infrastructure, including data curation utilities, SFT/DPO/RLVR training code, and a unified evaluation suite. Empirically, Tulu 3 surpasses the instruct versions of leading open models, including Llama 3.1-Instruct, Qwen 2.5, and Mistral, as well as closed models such as GPT-4o-mini and Claude 3.5-Haiku, establishing new open-weight state-of-the-art performance.
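The core idea of RLVR is to reward a policy only when its output can be mechanically verified as correct (e.g., a math answer matching the ground truth), instead of scoring it with a learned reward model. A minimal sketch of such a verifiable-reward function is below; all function names and the answer-extraction format are illustrative assumptions, not the paper's actual implementation.

```python
import re

def extract_final_answer(completion: str) -> str:
    """Pull the last number out of a model completion (assumed answer format)."""
    matches = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return matches[-1] if matches else ""

def verifiable_reward(completion: str, ground_truth: str) -> float:
    """Binary reward in the spirit of RLVR: 1.0 if the extracted answer
    matches the ground truth exactly, else 0.0. Such checks stand in for a
    learned reward model on tasks whose answers can be verified."""
    return 1.0 if extract_final_answer(completion) == ground_truth else 0.0

print(verifiable_reward("The answer is 42.", "42"))  # 1.0
print(verifiable_reward("I think it's 41.", "42"))   # 0.0
```

In an RL loop, this reward would be fed to a policy-gradient optimizer in place of a reward-model score; the binary signal cannot be gamed by reward-model misspecification, which is the robustness argument sketched in the summary above.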

📝 Abstract
Language model post-training is applied to refine behaviors and unlock new skills across a wide range of recent language models, but open recipes for applying these techniques lag behind proprietary ones. The underlying training data and recipes for post-training are simultaneously the most important pieces of the puzzle and the portion with the least transparency. To bridge this gap, we introduce Tulu 3, a family of fully-open state-of-the-art post-trained models, alongside its data, code, and training recipes, serving as a comprehensive guide for modern post-training techniques. Tulu 3, which builds on Llama 3.1 base models, achieves results surpassing the instruct versions of Llama 3.1, Qwen 2.5, Mistral, and even closed models such as GPT-4o-mini and Claude 3.5-Haiku. The training algorithms for our models include supervised finetuning (SFT), Direct Preference Optimization (DPO), and a novel method we call Reinforcement Learning with Verifiable Rewards (RLVR). With Tulu 3, we introduce a multi-task evaluation scheme for post-training recipes with development and unseen evaluations, standard benchmark implementations, and substantial decontamination of existing open datasets on said benchmarks. We conclude with analysis and discussion of training methods that did not reliably improve performance. In addition to the Tulu 3 model weights and demo, we release the complete recipe -- including datasets for diverse core skills, a robust toolkit for data curation and evaluation, the training code and infrastructure, and, most importantly, a detailed report for reproducing and further adapting the Tulu 3 approach to more domains.
Problem

Research questions and friction points this paper is trying to address.

Open Language Models
Post-training Methods
Transparency in AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning
Comprehensive Training Recipes
Open Approach
Authors

Nathan Lambert (Research Scientist, Allen AI): Reinforcement Learning, Machine Learning, Robotics, Responsible AI
Jacob Daniel Morrison (Allen Institute for AI)
Valentina Pyatkin (Allen Institute for AI & University of Washington): NLP, Generative AI, Language Modeling, Responsible AI, ML
Shengyi Huang (Allen Institute for Artificial Intelligence): Artificial Intelligence, Reinforcement Learning
Hamish Ivison (University of Washington): Natural Language Processing
Faeze Brahman (Research Scientist, Allen Institute for AI (Ai2)): Natural Language Processing, Machine Learning, AI Alignment, Human-Centered AI
Lester James V. Miranda (University of Cambridge): Natural Language Processing, Machine Learning
Alisa Liu (University of Washington): natural language processing, artificial intelligence
Nouha Dziri (Allen Institute for AI (Ai2)): Artificial Intelligence, Natural Language Processing
Shane Lyu
Yuling Gu
Saumya Malik (Allen Institute for AI)
Victoria Graf (University of Washington): Natural Language Processing
Jena D. Hwang (Allen Institute for AI): natural language processing, computational linguistics, commonsense reasoning, lexical semantics
Jiangjiang Yang
R. L. Bras
Oyvind Tafjord
Chris Wilhelm
Luca Soldaini (Allen Institute for AI): Large Language Models, Open Source AI, Information Retrieval
Noah A. Smith (University of Washington; Allen Institute for Artificial Intelligence): natural language processing, machine learning, computational social science, computer music
Yizhong Wang (University of Washington): Natural Language Processing, Machine Learning, Artificial Intelligence
Pradeep Dasigi (Allen Institute for AI (Ai2)): Natural Language Processing, Machine Learning, Language Modeling
Hanna Hajishirzi