Apriel-Nemotron-15B-Thinker

📅 2025-08-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the prohibitively high memory and computational overhead of large language models (LLMs) in enterprise deployment, this paper proposes an efficient inference-model construction paradigm. The method comprises a four-stage training pipeline (base model upscaling, continued pretraining, supervised fine-tuning, and GRPO-based reinforcement learning) that yields a 15B-parameter model matching or exceeding the performance of 32B baselines. Experiments demonstrate roughly half the memory footprint and improved inference latency, while matching or surpassing state-of-the-art models, including o1-mini and QWQ-32B, on diverse benchmarks spanning code generation and mathematical reasoning. This work empirically validates the feasibility of “small-parameter, high-efficiency” LLM architectures for enterprise applications, providing a reproducible technical pathway and empirical evidence for lightweight LLM deployment.
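The summary names GRPO as the reinforcement learning algorithm in stage 4. For orientation, the sketch below shows the group-relative advantage computation that defines GRPO (Shao et al., 2024): several completions are sampled per prompt, and each completion's reward is normalized against its own group, so no learned value model is needed. The group size, binary reward, and function name are illustrative assumptions; the paper's exact GRPO configuration is not given in this summary.

```python
# A minimal sketch of the group-relative advantage at the core of GRPO
# (Shao et al., 2024). The group size, binary reward, and function name are
# illustrative assumptions; the paper's exact RL setup is not specified here.
from statistics import mean, stdev

def group_relative_advantages(rewards: list[float], eps: float = 1e-6) -> list[float]:
    """Normalize per-completion rewards within one group sampled for the same prompt.

    Each completion's advantage is its reward minus the group mean, scaled by the
    group standard deviation, so the policy is pushed toward better-than-average
    completions without a separately trained value model.
    """
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: four completions for one math prompt, rewarded 1.0 when the final
# answer is verifiably correct and 0.0 otherwise.
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))  # ~[0.87, -0.87, -0.87, 0.87]
```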

📝 Abstract
While large language models (LLMs) have achieved remarkable reasoning capabilities across domains like code, math and other enterprise tasks, their significant memory and computational costs often preclude their use in practical enterprise settings. To this end, we introduce Apriel-Nemotron-15B-Thinker, a 15-billion-parameter model in the ServiceNow Apriel SLM series that achieves competitive performance against medium-sized state-of-the-art models such as o1-mini, QWQ-32B, and EXAONE-Deep-32B while maintaining only half the memory footprint of those alternatives. The Apriel-Nemotron-15B-Thinker model is trained in a four-stage training pipeline comprising 1) Base Model upscaling, 2) Continual Pre-training, 3) Supervised Fine-tuning (SFT), and 4) Reinforcement Learning using GRPO. Comprehensive evaluations across a diverse suite of benchmarks consistently demonstrate that our Apriel-Nemotron-15B-Thinker model matches or exceeds the performance of its 32-billion-parameter counterparts, despite being less than half their size.
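The abstract lists base model upscaling as stage 1 but does not describe the recipe here. A common way to upscale a smaller base model is depth upscaling, i.e. duplicating existing transformer layers and letting continual pre-training heal the copies; the sketch below illustrates that idea only. The function name, cyclic copy pattern, and target depth are assumptions, not the authors' documented procedure.

```python
# A minimal sketch of base-model upscaling via layer duplication (depth upscaling).
# The cyclic copy pattern, target depth, and function name are assumptions made
# for illustration; the paper's actual stage-1 recipe is not detailed in the abstract.
import copy

import torch.nn as nn

def upscale_decoder_layers(layers: nn.ModuleList, target_depth: int) -> nn.ModuleList:
    """Grow a decoder stack to target_depth by duplicating existing layers.

    Duplicated layers start as exact weight copies; the continual pre-training
    stage is then responsible for healing the redundancy and recovering quality
    before SFT and reinforcement learning.
    """
    source = list(layers)
    grown = list(source)
    i = 0
    while len(grown) < target_depth:
        grown.append(copy.deepcopy(source[i % len(source)]))
        i += 1
    return nn.ModuleList(grown)
```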
Problem

Research questions and friction points this paper is trying to address.

Reducing memory and computational costs of large language models
Matching performance of larger models with smaller size
Optimizing model training pipeline for enterprise efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

15B parameter model with half memory footprint
Four-stage training pipeline for efficiency
Matches performance of larger 32B models
👥 Authors

Shruthan Radhakrishna (ServiceNow)
Soham Parikh (Numos AI)
Gopal Sarda (ServiceNow)
Anil Turkkan (ServiceNow)
Quaizar Vohra (ServiceNow)
Raymond Li (University of British Columbia) · Natural Language Processing, Interpretability
Dhruv Jhamb (ServiceNow)
Kelechi Ogueji (ServiceNow)
Aanjaneya Shukla (ServiceNow)
Oluwanifemi Bamgbose (University of Waterloo)
Toby Liang (ServiceNow)
Luke Kumar (Senior Applied Research Scientist, ServiceNow Research) · Natural Language Processing, Survival Analysis, Reinforcement Learning
Oleksiy Ostapenko (University of Montreal, MILA) · Lifelong Learning, Machine Learning
Shiva Krishna Reddy Malay (ServiceNow)
Aman Tiwari (ServiceNow)
Tara Bogavelli (ServiceNow)
Vikas Yadav (ServiceNow, University of Arizona) · Natural Language Processing, Deep Learning
Jash Mehta (ServiceNow)
Saloni Mittal (Applied Research Scientist, ServiceNow; Carnegie Mellon University) · Natural Language Processing, Machine Learning, LLMs
Akshay Kalkunte (ServiceNow)
Pulkit Pattnaik (ServiceNow)
Khalil Slimi (ServiceNow)
Anirudh Sreeram (ServiceNow)
Jishnu Nair (ServiceNow)
Akintunde Oladipo (ServiceNow)