Apriel-1.5-15B-Thinker

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational and parameter-count demands of multimodal reasoning models, this paper proposes a lightweight, efficient alternative: a progressive three-stage continual-pretraining framework built on a 15B-parameter open-source architecture, requiring neither pretraining from scratch nor reinforcement learning or preference optimization. Methodologically, it combines data-driven vision-language joint pretraining, synthetic data augmentation, explicit reasoning-trace supervision, and high-quality instruction fine-tuning. The core contribution is substituting depth-wise architectural scaling and a staged curriculum for model-scale expansion, achieving frontier-level performance in mathematical, coding, and scientific reasoning while drastically reducing computational overhead. The resulting model attains an Artificial Analysis Intelligence Index score of 52, comparable to DeepSeek-R1, and averages within five points of Gemini-2.5-Flash and Claude Sonnet-3.7 across ten image-understanding benchmarks, all while supporting single-GPU deployment.

📝 Abstract
We present Apriel-1.5-15B-Thinker, a 15-billion parameter open-weights multimodal reasoning model that achieves frontier-level performance through training design rather than sheer scale. Starting from Pixtral-12B, we apply a progressive three-stage methodology: (1) depth upscaling to expand reasoning capacity without pretraining from scratch, (2) staged continual pre-training that first develops foundational text and vision understanding, then enhances visual reasoning through targeted synthetic data generation addressing spatial structure, compositional understanding, and fine-grained perception, and (3) high-quality text-only supervised fine-tuning on curated instruction-response pairs with explicit reasoning traces spanning mathematics, coding, science, and tool use. Notably, our model achieves competitive results without reinforcement learning or preference optimization, isolating the contribution of our data-centric continual pre-training approach. On the Artificial Analysis Intelligence Index, Apriel-1.5-15B-Thinker attains a score of 52, matching DeepSeek-R1-0528 despite requiring significantly fewer computational resources. Across ten image benchmarks, its performance is on average within five points of Gemini-2.5-Flash and Claude Sonnet-3.7, a key achievement for a model operating within single-GPU deployment constraints. Our results demonstrate that thoughtful mid-training design can close substantial capability gaps without massive scale, making frontier-level multimodal reasoning accessible to organizations with limited infrastructure. We release the model checkpoint, all training recipes, and evaluation protocols under the MIT license to advance open-source research.
Problem

Research questions and friction points this paper is trying to address.

Developing multimodal reasoning without massive computational resources
Enhancing visual reasoning through synthetic data generation
Achieving competitive AI performance with single-GPU deployment constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive three-stage methodology for model training
Depth upscaling expands reasoning without full pretraining
Staged pre-training with synthetic data enhances vision understanding
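The depth-upscaling idea above can be sketched as follows. This is an illustrative toy, not the authors' exact recipe: one common approach duplicates a contiguous block of pretrained transformer layers so every layer in the deeper model starts from trained weights rather than random initialization. Layers are represented here as plain list items; a real implementation would deep-copy weight tensors.

```python
def depth_upscale(layers, start, end):
    """Expand a layer stack by duplicating the contiguous block
    layers[start:end], so the upscaled model inherits pretrained
    weights for every layer instead of initializing new ones."""
    block = list(layers[start:end])
    # Insert the copied block immediately after the original one.
    return list(layers[:end]) + block + list(layers[end:])

# Toy 6-layer model: duplicating layers 2..4 yields an 8-layer model.
base = [f"layer{i}" for i in range(6)]
upscaled = depth_upscale(base, 2, 4)
print(len(base), "->", len(upscaled))  # 6 -> 8
```

Which block to duplicate (middle layers vs. a full copy of the upper half) is a design choice that varies across depth-upscaled models; the duplicated model is then continually pre-trained, as in stage (2) of the paper's methodology.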
Shruthan Radhakrishna
SLAM Lab, ServiceNow
Aman Tiwari
SLAM Lab, ServiceNow
Aanjaneya Shukla
SLAM Lab, ServiceNow
Masoud Hashemi
ServiceNow
LLM, Trust & Governance, Medical signal and image processing, Compressed sensing
Rishabh Maheshwary
Applied Scientist, ServiceNow
Machine Learning, Deep Learning, Natural Language Processing
Shiva Krishna Reddy Malay
SLAM Lab, ServiceNow
Jash Mehta
SLAM Lab, ServiceNow
Pulkit Pattnaik
SLAM Lab, ServiceNow
Saloni Mittal
Applied Research Scientist, ServiceNow; Carnegie Mellon University
natural language processing, machine learning, LLMs
Khalil Slimi
SLAM Lab, ServiceNow
Kelechi Ogueji
SLAM Lab, ServiceNow
Akintunde Oladipo
SLAM Lab, ServiceNow
S. Parikh
SLAM Lab, ServiceNow
Oluwanifemi Bamgbose
University of Waterloo
Toby Liang
SLAM Lab, ServiceNow
A. Masry
SLAM Lab, ServiceNow
Khyati Mahajan
University of North Carolina at Charlotte, ServiceNow
natural language processing, generative AI, artificial intelligence, machine learning, deep learning
Sai Mudumba
SLAM Lab, ServiceNow
Vikas Yadav
ServiceNow, University of Arizona
Natural Language Processing, Deep Learning
Sathwik Tejaswi Madhusudhan
SLAM Lab, ServiceNow
Torsten Scholak
SLAM Lab, ServiceNow
Sagar Davasam
SLAM Lab, ServiceNow
Srinivas Sunkara
Google DeepMind
Nicholas Chapados
SLAM Lab, ServiceNow