Beyond Tokens: Concept-Level Training Objectives for LLMs

📅 2026-01-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a key limitation of conventional large language model training: token-level next-token prediction cannot distinguish between semantically equivalent expressions that differ in surface form, biasing models toward superficial patterns rather than deep semantic understanding. To overcome this, the authors elevate the training objective to the conceptual level, introducing a systematic concept-level supervision signal through a concept mapping framework that unifies surface variants such as “mom” and “mother” under a shared concept like MOTHER. Their approach combines a concept alignment loss with a multi-surface aggregation strategy, encouraging the model to prioritize semantic correctness over exact surface-form matching. Experiments show that the resulting concept-aware models achieve lower perplexity, stronger performance across multiple NLP benchmarks, and improved robustness in domain transfer scenarios.
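The paper's code is not reproduced here, so the following is only a minimal sketch of the idea behind a multi-surface aggregation loss as the summary describes it: instead of penalizing the model for picking a synonym of the reference token, probability mass is summed over all surface forms that map to the reference's concept before taking the negative log-likelihood. The `CONCEPTS` map and the probability values are hypothetical illustrations, not the authors' actual framework.

```python
import math

# Hypothetical concept map: surface forms -> shared concept ID (illustrative only).
CONCEPTS = {"mom": "MOTHER", "mommy": "MOTHER", "mother": "MOTHER", "dad": "FATHER"}

def concept_level_loss(token_probs, target_surface):
    """Negative log-likelihood of the target's *concept*: aggregate the
    probability mass of every surface form mapping to that concept."""
    concept = CONCEPTS[target_surface]
    mass = sum(p for tok, p in token_probs.items() if CONCEPTS.get(tok) == concept)
    return -math.log(mass)

# Toy model output: probability split across synonyms of MOTHER.
probs = {"mom": 0.3, "mother": 0.25, "mommy": 0.05, "dad": 0.1}

token_nll = -math.log(probs["mom"])             # token-level loss: ~1.204
concept_nll = concept_level_loss(probs, "mom")  # concept-level loss: -ln(0.6) ~0.511
```

Because the model spreads mass over synonymous surface forms, the concept-level loss is strictly lower than the token-level loss here, which is exactly the mismatch the paper argues NTP gets wrong.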

📝 Abstract
The next-token prediction (NTP) objective has been foundational in the development of modern large language models (LLMs), driving advances in fluency and generalization. However, NTP operates at the token level, treating deviations from a single reference continuation as errors even when alternative continuations are equally plausible or semantically equivalent (e.g., “mom” vs. “mother”). As a result, token-level loss can penalize valid abstractions, paraphrases, or conceptually correct reasoning paths, biasing models toward surface form rather than underlying meaning. This mismatch between the training signal and semantic correctness motivates learning objectives that operate over higher-level representations. We propose a shift from token-level to concept-level prediction, where concepts group multiple surface forms of the same idea (e.g., “mom,” “mommy,” “mother” → MOTHER). We introduce various methods for integrating conceptual supervision into LLM training and show that concept-aware models achieve lower perplexity, improved robustness under domain shift, and stronger performance than NTP-based models on diverse NLP benchmarks. This suggests concept-level supervision as an improved training signal that better aligns LLMs with human semantic abstractions.
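The abstract mentions "various methods for integrating conceptual supervision" without specifying them; one plausible integration, sketched below under that assumption, is a soft-target variant: the one-hot token label is replaced by a distribution that spreads the label mass uniformly over all surface forms sharing the reference's concept, and training minimizes cross-entropy against that distribution. The vocabulary, `CONCEPT_OF` map, and uniform spreading are illustrative choices, not the authors' method.

```python
import math

# Hypothetical toy vocabulary and concept grouping (illustrative only).
VOCAB = ["mom", "mommy", "mother", "dad", "cat"]
CONCEPT_OF = {"mom": "MOTHER", "mommy": "MOTHER", "mother": "MOTHER",
              "dad": "FATHER", "cat": "CAT"}

def soft_concept_targets(reference_token):
    """Replace the one-hot token target with label mass spread uniformly
    over every surface form that shares the reference's concept."""
    concept = CONCEPT_OF[reference_token]
    variants = [t for t in VOCAB if CONCEPT_OF[t] == concept]
    return {t: (1.0 / len(variants) if t in variants else 0.0) for t in VOCAB}

def cross_entropy(targets, probs):
    """Cross-entropy of model probs against a soft target distribution."""
    return -sum(q * math.log(probs[t]) for t, q in targets.items() if q > 0)

probs = {"mom": 0.3, "mommy": 0.05, "mother": 0.25, "dad": 0.3, "cat": 0.1}
targets = soft_concept_targets("mom")  # 1/3 each on mom, mommy, mother
loss = cross_entropy(targets, probs)
```

This formulation keeps the standard cross-entropy machinery intact, so it could in principle drop into an existing NTP training loop by swapping the target distribution.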
Problem

Research questions and friction points this paper is trying to address.

next-token prediction
semantic equivalence
concept-level training
surface form bias
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

concept-level supervision
next-token prediction
semantic abstraction
large language models
training objective