Next Concept Prediction in Discrete Latent Space Leads to Stronger Language Models

📅 2026-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional language models rely on next-token prediction (NTP), which struggles to capture semantic units spanning multiple tokens, thereby limiting pretraining effectiveness. This work proposes a novel paradigm—next-concept prediction (NCP)—that, for the first time, employs discrete semantic concepts as the pretraining objective. By constructing a concept vocabulary via vector quantization, NCP predicts multi-token semantic units in a discrete latent space and is jointly trained with NTP in a framework termed ConceptLM. The approach demonstrates consistent gains across model scales ranging from 70M to 1.5B parameters and even on Llama-8B, outperforming baselines on all 13 evaluated benchmarks after continued pretraining. These results substantiate the novelty, efficacy, and scalability of the proposed NCP framework.

📝 Abstract
We propose Next Concept Prediction (NCP), a generative pretraining paradigm built on top of Next Token Prediction (NTP). NCP predicts discrete concepts that span multiple tokens, thereby forming a more challenging pretraining objective. Our model, ConceptLM, quantizes hidden states using Vector Quantization to construct a concept vocabulary. It leverages both NCP and NTP to drive parameter updates, generating a concept that guides the generation of the following tokens. We train ConceptLM from scratch at scales ranging from 70M to 1.5B parameters on up to 300B training tokens, using Pythia and GPT-2 backbones. Results on 13 benchmarks show that NCP yields consistent performance gains over traditional token-level models. Furthermore, continual pretraining experiments on an 8B-parameter Llama model indicate that NCP can further improve an NTP-trained model. Our analysis suggests that NCP leads to more powerful language models by introducing a harder pretraining task, providing a promising path toward better language modeling.
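The core mechanism the abstract describes, mapping continuous hidden states to entries of a discrete concept vocabulary via Vector Quantization, can be sketched as nearest-neighbor lookup against a codebook. This is a minimal illustrative sketch, not the paper's actual implementation; the function name `vector_quantize`, the toy codebook, and all shapes are assumptions for demonstration.

```python
import numpy as np

def vector_quantize(hidden, codebook):
    """Assign each hidden state to its nearest codebook entry (L2 distance).

    hidden:   (T, d) array of hidden states.
    codebook: (K, d) array, the discrete concept vocabulary.
    Returns the concept ids (the discrete targets a model like ConceptLM
    would be trained to predict) and the quantized vectors.
    """
    # Pairwise squared distances via broadcasting: (T, 1, d) - (1, K, d) -> (T, K)
    d2 = ((hidden[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    ids = d2.argmin(axis=1)          # discrete concept id per hidden state
    quantized = codebook[ids]        # replace each state with its codebook entry
    return ids, quantized

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # toy concept vocabulary with 8 entries
# Hidden states placed near codebook entries 2, 5, 5, plus small noise.
hidden = codebook[[2, 5, 5]] + 0.01 * rng.normal(size=(3, 4))
ids, q = vector_quantize(hidden, codebook)
print(ids.tolist())  # → [2, 5, 5]
```

In a full training setup, the codebook would be learned jointly with the model (e.g. with a straight-through gradient estimator, as is standard for VQ-based models), and the predicted ids would supply the NCP loss alongside the usual NTP loss.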
Problem

Research questions and friction points this paper is trying to address.

Next Concept Prediction
Language Models
Pretraining
Discrete Latent Space
Next Token Prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Next Concept Prediction
Vector Quantization
Discrete Latent Space
Generative Pretraining
Concept Vocabulary
Yuliang Liu
LUMIA Lab, School of Artificial Intelligence, Shanghai Jiao Tong University; Shanghai AI Laboratory; Shanghai Innovation Institute
Yunchong Song
Ph.D. student, Shanghai Jiao Tong University
Machine Learning
Yixuan Wang
LUMIA Lab, School of Artificial Intelligence, Shanghai Jiao Tong University; Shanghai AI Laboratory; Shanghai Innovation Institute
Kewen Ge
LUMIA Lab, School of Artificial Intelligence, Shanghai Jiao Tong University
Alex Lamb
Tsinghua University, Microsoft Research (NYC), Université de Montréal, Google Brain
Machine Learning, Forecasting, Computer Vision, Recurrent Neural Networks, Generative Models
Qipeng Guo
Fudan University
Kai Chen
Shanghai AI Laboratory
LLM, VLM, Computer Vision
Bowen Zhou
Chair Professor, Department of Electrical Engineering, Tsinghua University; Founder of Frontis.ai
Machine Learning, Natural Language Processing, Representation Learning and Reasoning, Conversational
Zhouhan Lin
LUMIA Lab, School of Artificial Intelligence, Shanghai Jiao Tong University; Shanghai AI Laboratory