🤖 AI Summary
Standard next-token prediction (NTP) in large language model (LLM) training is inefficient and underuses the semantic information available in the training signal. Method: We propose a mutual information maximization–based target token selection strategy to replace conventional NTP, using information-theoretic criteria to identify highly informative tokens as prediction targets. The approach is evaluated on arithmetic reasoning, multi-label text classification, and natural language generation tasks. Contribution/Results: Empirical results show that the new paradigm improves model performance and generalization without increasing computational cost. Grounded in information theory, the method offers strong theoretical interpretability and a principled, efficient alternative to NTP for LLM pretraining, advancing the design of training objectives beyond standard autoregressive prediction.
📝 Abstract
Optimizing training in large language models (LLMs) remains an essential challenge, particularly improving model performance while keeping computational costs fixed. This work challenges the convention of training LLMs with next-token prediction (NTP), arguing that predicting information-rich tokens during training is a more effective way to train LLMs. We investigate the impact of the proposed approach on three kinds of LLM tasks: arithmetic reasoning, multi-label text classification, and natural language generation. This work offers a principled approach to optimizing LLM training, advancing both model performance and the theoretical understanding of target-token selection strategies.
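The abstract does not spell out how "information-rich" tokens are scored. As a minimal toy illustration of the general idea (not the authors' actual method), the sketch below ranks each target position by the pointwise mutual information between a token and its preceding token, estimated from bigram counts, and keeps only the highest-scoring positions as training targets. The function name and bigram scoring are assumptions for illustration only.

```python
import math
from collections import Counter

def select_informative_targets(tokens, top_k=3):
    """Toy sketch: score each target position i >= 1 by pointwise mutual
    information PMI(w_i; w_{i-1}) = log[ p(w_i | w_{i-1}) / p(w_i) ],
    estimated from unigram/bigram counts over the sequence itself.
    Returns the top_k most informative target positions, sorted."""
    n = len(tokens)
    unigram = Counter(tokens)
    bigram = Counter(zip(tokens, tokens[1:]))

    scores = []
    for i in range(1, n):
        prev, cur = tokens[i - 1], tokens[i]
        p_cur = unigram[cur] / n                       # marginal p(w_i)
        p_cur_given_prev = bigram[(prev, cur)] / unigram[prev]  # p(w_i | w_{i-1})
        scores.append((i, math.log(p_cur_given_prev / p_cur)))

    # Keep only the top_k highest-PMI positions as training targets.
    selected = sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]
    return sorted(i for i, _ in selected)

tokens = "the cat sat on the mat the cat ran".split()
print(select_informative_targets(tokens, top_k=3))
```

In a real training loop these selected positions would mask or reweight the cross-entropy loss, so gradient signal concentrates on surprising, information-rich tokens rather than being spread uniformly over all positions.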