2 OLMo 2 Furious

📅 2024-12-31
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
To address limitations in training stability, per-token training efficiency, and downstream capability of fully open large language models (LLMs), this work introduces OLMo 2, a new family of fully open LLMs at the 7B and 13B scales. Methodologically, the work contributes a dense autoregressive architecture and training recipe modified for better stability and per-token efficiency; Dolmino Mix 1124, a specialized data mix introduced via late-stage curriculum training (i.e., during the annealing phase of pretraining) that improves capabilities across many downstream benchmarks; and OLMo 2-Instruct, an instruction-tuning recipe adapted from Tülu 3 that focuses on permissively licensed data and extends the final stage with reinforcement learning with verifiable rewards (RLVR). The contributions are threefold: (1) full-stack openness, including training data, code, recipes, logs, and thousands of intermediate checkpoints; (2) base models at the Pareto frontier of performance to compute, often matching or outperforming open-weight-only models such as Llama 3.1 and Qwen 2.5 while using fewer FLOPs; and (3) instruction-tuned models competitive with or surpassing open-weight-only models of comparable size, including Qwen 2.5, Llama 3.1, and Gemma 2. Collectively, OLMo 2 advances reproducible, verifiable, and transparent LLM research.
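The summary above mentions reinforcement learning with verifiable rewards (RLVR), in which the reward comes from programmatically checking a model's output against a known-correct answer rather than from a learned reward model. The paper's own implementation is not reproduced here; the snippet below is only a minimal sketch of that idea, and the `verifiable_reward` helper, the regex-based answer extraction, and the exact-match criterion are illustrative assumptions, not the authors' code.

```python
import re

def verifiable_reward(completion: str, reference_answer: str) -> float:
    """Binary, verifiable reward: 1.0 if the completion's final numeric
    answer exactly matches the reference, else 0.0. No learned reward
    model is involved, which is what makes the signal verifiable."""
    # Treat the last number-like token in the completion as the model's
    # final answer (a common convention for math-style prompts; this is
    # purely illustrative, not the parsing logic used in the paper).
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
    if not numbers:
        return 0.0
    return 1.0 if numbers[-1] == reference_answer.strip() else 0.0

if __name__ == "__main__":
    print(verifiable_reward("Adding them up gives 42.", "42"))    # 1.0
    print(verifiable_reward("I believe the answer is 41.", "42"))  # 0.0
```

A reward of this form would then drive an RL policy update over sampled completions; this sketch does not reflect the specific algorithm, prompts, or verifiers used for OLMo 2-Instruct.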

📝 Abstract
We present OLMo 2, the next generation of our fully open language models. OLMo 2 includes dense autoregressive models with improved architecture and training recipe, pretraining data mixtures, and instruction tuning recipes. Our modified model architecture and training recipe achieve both better training stability and improved per-token efficiency. Our updated pretraining data mixture introduces a new, specialized data mix called Dolmino Mix 1124, which significantly improves model capabilities across many downstream task benchmarks when introduced via late-stage curriculum training (i.e. specialized data during the annealing phase of pretraining). Finally, we incorporate best practices from Tülu 3 to develop OLMo 2-Instruct, focusing on permissive data and extending our final-stage reinforcement learning with verifiable rewards (RLVR). Our OLMo 2 base models sit at the Pareto frontier of performance to compute, often matching or outperforming open-weight only models like Llama 3.1 and Qwen 2.5 while using fewer FLOPs and with fully transparent training data, code, and recipe. Our fully open OLMo 2-Instruct models are competitive with or surpassing open-weight only models of comparable size, including Qwen 2.5, Llama 3.1 and Gemma 2. We release all OLMo 2 artifacts openly -- models at 7B and 13B scales, both pretrained and post-trained, including their full training data, training code and recipes, training logs and thousands of intermediate checkpoints. The final instruction model is available on the Ai2 Playground as a free research demo.
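The abstract describes introducing Dolmino Mix 1124 via late-stage curriculum training, i.e., switching to a specialized data mix during the annealing phase at the end of pretraining while the learning rate decays toward zero. As a rough illustration only (the constant-then-linear schedule, the annealing fraction, and the mix names below are assumptions for the sketch, not the paper's published recipe):

```python
def lr_at_step(step: int, total_steps: int, peak_lr: float,
               anneal_start_frac: float = 0.9) -> float:
    """Hold the learning rate at its peak, then decay it linearly to zero
    over the final phase of pretraining (the annealing phase)."""
    anneal_start = int(anneal_start_frac * total_steps)
    if step < anneal_start:
        return peak_lr
    remaining = total_steps - anneal_start
    return peak_lr * max(0.0, (total_steps - step) / remaining)

def data_mix_at_step(step: int, total_steps: int,
                     anneal_start_frac: float = 0.9) -> str:
    """Late-stage curriculum: sample from the general pretraining mix for
    most of training, then switch to the specialized mix once annealing
    begins."""
    if step < int(anneal_start_frac * total_steps):
        return "general_pretraining_mix"
    return "dolmino_mix_1124"

if __name__ == "__main__":
    total_steps, peak_lr = 1000, 3e-4
    for step in (0, 850, 900, 950, 999):
        print(step, f"{lr_at_step(step, total_steps, peak_lr):.2e}",
              data_mix_at_step(step, total_steps))
```

One common rationale for such a curriculum (the paper gives its own analysis) is that the highest-quality, most targeted data is seen while the learning rate is small and the model is settling into its final weights.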
Problem

Research questions and friction points this paper is trying to address.

Efficient Language Model
Natural Language Processing
Transparency and Reproducibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

OLMo 2
Advanced Model Design
Efficient Training
Authors
Pete Walsh · Allen Institute for AI, University of Washington
Luca Soldaini · Allen Institute for AI · Large Language Models, Open Source AI, Information Retrieval
Dirk Groeneveld · Allen Institute for Artificial Intelligence · natural language processing, neural networks, deep learning
Kyle Lo · Allen Institute for AI · natural language processing, machine learning, human computer interaction, statistics
Shane Arora · Allen Institute for AI, University of Washington
Akshita Bhagia · Allen Institute for AI, University of Washington
Yuling Gu · Allen Institute for AI, University of Washington
Shengyi Huang · Allen Institute for Artificial Intelligence · Artificial Intelligence, Reinforcement Learning
Matt Jordan · Graduate Research Assistant, UT Austin · Adversarial Examples
Nathan Lambert · Research Scientist, Allen AI · Reinforcement Learning, Machine Learning, Robotics, Responsible AI
Dustin Schwenk · Allen Institute for AI, University of Washington
Oyvind Tafjord · Allen Institute for AI, University of Washington
Taira Anderson · Allen Institute for AI, University of Washington
David Atkinson · Allen Institute for AI, University of Washington
Faeze Brahman · Research Scientist; Allen Institute for AI (Ai2) · Natural Language Processing, Machine Learning, AI Alignment, Human-Centered AI
Christopher Clark · Allen Institute for AI, University of Washington
Pradeep Dasigi · Allen Institute for AI (Ai2) · Natural Language Processing, Machine Learning, Language Modeling
Nouha Dziri · Allen Institute for AI (Ai2) · Artificial Intelligence, Natural Language Processing
Michal Guerquin · Allen Institute for AI, University of Washington
Hamish Ivison · University of Washington · Natural Language Processing
Pang Wei Koh · University of Washington; Allen Institute for AI · Machine learning, Natural language processing, Computational biology
Jiacheng Liu · Allen Institute for AI, University of Washington
Saumya Malik · Allen Institute for AI
William Merrill · Ai2 / TTIC · language models, formal languages, computational linguistics, deep learning
Lester James Validad Miranda · Allen Institute for AI, University of Washington
Jacob Daniel Morrison · Allen Institute for AI, University of Washington
Tyler C. Murray · Allen Institute for AI, University of Washington
Crystal Nam · Allen Institute for AI, University of Washington
Valentina Pyatkin · Allen Institute for AI & University of Washington · NLP, Generative AI, Language Modeling, Responsible AI, ML
Aman Rangapur · Allen Institute for Artificial Intelligence · Generative AI, Natural Language Programming, Adversarial Learning
Michael Schmitz · Allen Institute for AI, University of Washington
Sam Skjonsberg · Allen Institute for AI, University of Washington
David Wadden · Google DeepMind · Natural Language Processing, Machine Learning
Chris Wilhelm · Allen Institute for AI, University of Washington
Michael Wilson · Allen Institute for AI, University of Washington
Luke S. Zettlemoyer · University of Washington
Ali Farhadi · Allen Institute for AI, University of Washington
Noah A. Smith · University of Washington; Allen Institute for Artificial Intelligence · natural language processing, machine learning, computational social science, computer music
Hanna Hajishirzi · Allen Institute for AI, University of Washington