🤖 AI Summary
This work addresses the limited sample efficiency, interpretability, and controllability that arise when large language models (LLMs) are pretrained solely on discrete next-token prediction. We propose Continuous Concept Mixing (CoCoMix), an end-to-end pretraining framework that jointly optimizes next-token prediction and continuous concept prediction. The method uses a pretrained sparse autoencoder to extract interpretable semantic concepts from hidden states, predicts these concepts at an intermediate transformer layer, and interleaves the predicted continuous concept vectors with the token hidden representations, enabling concept-driven representation learning. Against baselines including standard next-token prediction, knowledge distillation, and pause-token insertion, the framework improves both language modeling perplexity and downstream reasoning performance, with over 30% higher training sample efficiency. It also supports concept-level visualization and intervention, substantially improving model interpretability and controllability.
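To make the concept-extraction step concrete, below is a minimal sketch of a sparse autoencoder over transformer hidden states, assuming a TopK-style sparsity rule. The `TopKSAE` class, its dimensions, and the sparsity mechanism are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of concept extraction with a sparse autoencoder (SAE),
# assuming a TopK-style sparsity rule. Names and dimensions are
# illustrative, not the authors' implementation.
import torch
import torch.nn as nn


class TopKSAE(nn.Module):
    """Encodes hidden states into a sparse, overcomplete concept space."""

    def __init__(self, hidden_dim: int, n_concepts: int, k: int):
        super().__init__()
        self.encoder = nn.Linear(hidden_dim, n_concepts)
        self.decoder = nn.Linear(n_concepts, hidden_dim)
        self.k = k  # number of concepts kept active per position

    def encode(self, h: torch.Tensor) -> torch.Tensor:
        # Nonnegative pre-activations over the concept dictionary.
        acts = torch.relu(self.encoder(h))
        # Keep only the k largest activations per position; zero the rest.
        topk = torch.topk(acts, self.k, dim=-1)
        sparse = torch.zeros_like(acts)
        sparse.scatter_(-1, topk.indices, topk.values)
        return sparse

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Trained so that decoder(encode(h)) reconstructs h.
        return self.decoder(self.encode(h))


# Usage: extract concept activations from hidden states of shape
# (batch, seq_len, hidden_dim); nonzero entries index active "concepts".
sae = TopKSAE(hidden_dim=768, n_concepts=32768, k=32)
h = torch.randn(2, 16, 768)
concepts = sae.encode(h)  # (2, 16, 32768), at most 32 nonzeros per position
```

The overcomplete dictionary (here 32,768 concepts for a 768-dimensional hidden state) is what makes individual active units interpretable as distinct semantic concepts.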
📝 Abstract
Next token prediction has been the standard training objective used in large language model pretraining. Representations are learned as a result of optimizing for token-level perplexity. We propose Continuous Concept Mixing (CoCoMix), a novel pretraining framework that combines discrete next token prediction with continuous concepts. Specifically, CoCoMix predicts continuous concepts learned from a pretrained sparse autoencoder and mixes them into the model's hidden state by interleaving with token hidden representations. Through experiments on multiple benchmarks, including language modeling and downstream reasoning tasks, we show that CoCoMix is more sample efficient and consistently outperforms standard next token prediction, knowledge distillation, and inserting pause tokens. We find that combining both concept learning and interleaving in an end-to-end framework is critical to performance gains. Furthermore, CoCoMix enhances interpretability and steerability by allowing direct inspection and modification of the predicted concept, offering a transparent way to guide the model's internal reasoning process.
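As a rough illustration of the predict-then-mix step the abstract describes, the sketch below predicts concept activations from an intermediate hidden state, compresses the prediction into a continuous vector, and interleaves that vector with the token hidden representations. The head names, the sigmoid compression, and the interleaving pattern are assumptions made for exposition, not the paper's exact architecture.

```python
# Illustrative sketch of the CoCoMix mixing step at one intermediate
# transformer layer; module names and shapes are assumptions.
import torch
import torch.nn as nn


class ConceptMixer(nn.Module):
    def __init__(self, hidden_dim: int, n_concepts: int):
        super().__init__()
        # Predicts SAE concept activations from the hidden state; this
        # prediction is supervised against the pretrained SAE's concepts.
        self.concept_head = nn.Linear(hidden_dim, n_concepts)
        # Compresses the predicted concepts into one continuous vector.
        self.compress = nn.Linear(n_concepts, hidden_dim)

    def forward(self, h: torch.Tensor):
        # h: (batch, seq_len, hidden_dim) at an intermediate layer.
        concept_logits = self.concept_head(h)
        c = self.compress(torch.sigmoid(concept_logits))
        # Interleave token states and concept vectors along the sequence,
        # (h_1, c_1, h_2, c_2, ...), doubling the sequence length.
        mixed = torch.stack([h, c], dim=2).flatten(1, 2)
        return mixed, concept_logits
```

During pretraining, `concept_logits` would presumably carry the auxiliary concept-prediction loss against the SAE's extracted concepts, while `mixed` continues through the remaining layers toward the usual next token prediction loss; at inference, editing the predicted concepts before compression is one plausible way to realize the inspection and steering described above.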