Training LLMs Beyond Next Token Prediction - Filling the Mutual Information Gap

📅 2025-10-31
🤖 AI Summary
Standard next-token prediction (NTP) in large language model (LLM) training is inefficient and underuses the semantic information in the training signal. Method: The authors propose a mutual-information-maximization-based target-token selection strategy to replace conventional NTP, using information-theoretic principles to identify high-informativeness tokens as prediction targets. The approach is evaluated on arithmetic reasoning, multi-label text classification, and natural-language generation tasks. Contribution/Results: Empirical results show that the new paradigm improves model performance and generalization without increasing computational cost, and the objective has a principled grounding in information theory, making it an interpretable, efficient alternative to NTP for LLM pretraining and advancing the design of training objectives beyond standard autoregressive prediction.
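The summary does not specify how the mutual-information scores are estimated or how target tokens are chosen, so the following is only a minimal PyTorch sketch of the general idea: instead of averaging cross-entropy uniformly over all next tokens, keep only the highest-scoring (most informative) target positions. The function name `info_weighted_ntp_loss`, the `info_scores` input, and the `top_frac` parameter are hypothetical illustrations, not the paper's actual method.

```python
import torch
import torch.nn.functional as F

def info_weighted_ntp_loss(logits, targets, info_scores, top_frac=0.5):
    """Cross-entropy over a subset of target tokens selected by an
    informativeness score, rather than uniformly over all next tokens.

    logits:      (batch, seq_len, vocab)  model predictions
    targets:     (batch, seq_len)         gold next tokens
    info_scores: (batch, seq_len)         per-token informativeness proxy
                 (stand-in for a mutual-information estimate; hypothetical)
    top_frac:    fraction of highest-scoring positions kept as targets
    """
    b, t, v = logits.shape
    # Per-position loss, no reduction, reshaped back to (batch, seq_len).
    per_token = F.cross_entropy(
        logits.reshape(-1, v), targets.reshape(-1), reduction="none"
    ).reshape(b, t)

    # Select the top-scoring fraction of positions in each sequence.
    k = max(1, int(top_frac * t))
    _, idx = info_scores.topk(k, dim=-1)
    mask = torch.zeros_like(per_token)
    mask.scatter_(1, idx, 1.0)

    # Average loss over the selected positions only.
    return (per_token * mask).sum() / mask.sum()

# Example: batch of 2 sequences of length 8, vocabulary of 100 tokens.
logits = torch.randn(2, 8, 100)
targets = torch.randint(0, 100, (2, 8))
info_scores = torch.rand(2, 8)  # placeholder for an MI-based score
loss = info_weighted_ntp_loss(logits, targets, info_scores)
```

Note that the per-token compute is unchanged (the model still produces logits at every position); only the loss mask differs, which is consistent with the summary's claim of no added computational cost.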

📝 Abstract
Optimizing training performance in large language models (LLMs) remains an essential challenge, particularly improving model performance while keeping computational costs in check. This work challenges the conventional approach of training LLMs with next-token prediction (NTP), arguing that predicting information-rich tokens during training is a more effective way to train LLMs. We investigate the impact of the proposed solution on three kinds of tasks for LLMs: arithmetic, multi-label text classification, and natural-language generation. This work offers a principled approach to optimizing LLM training, advancing both model performance and the theoretical understanding of target-token selection strategies.
Problem

Research questions and friction points this paper is trying to address.

Optimizing LLM training beyond next-token prediction
Improving model performance while controlling computational costs
Developing better target-token selection strategies for training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Replacing next-token prediction with information-rich token prediction
Applying the new training objective to arithmetic, multi-label classification, and generation tasks
Developing principled target-token selection strategies for LLMs
Authors: Chun-Hao Yang, Bo-Han Feng, Tzu-Yuan Lai, Yan Yu Chen, Yin-Kai Dean Huang, Shou-De Lin (all National Taiwan University)
Tags: AI, machine learning, natural language processing