ToW: Thoughts of Words Improve Reasoning in Large Language Models

📅 2024-10-21
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Addressing two fundamental limitations in large language model (LLM) inference—factual hallucination and inefficient implicit reasoning—this paper proposes Thoughts of Words (ToW). ToW reformulates standard next-word prediction as a fine-grained, word-level reasoning task during pretraining, requiring the model to articulate *why* a given word is generated and how it relates to the preceding context. This turns opaque token prediction into interpretable, stepwise reasoning while remaining task-agnostic and introducing no additional label or semantic biases. ToW annotations are generated automatically by distilling from larger models, enabling lightweight continual pretraining on only 70K samples. Empirical evaluation shows consistent improvements across diverse reasoning benchmarks: average performance gains of 7–9%, with up to a 10% reduction in hallucination.

📝 Abstract
We introduce thoughts of words (ToW), a novel training-time data-augmentation method for next-word prediction. ToW views next-word prediction as a core reasoning task and injects fine-grained thoughts explaining what the next word should be and how it is related to the previous contexts in pre-training texts. Our formulation addresses two fundamental drawbacks of existing next-word prediction learning schemes: they induce factual hallucination and are inefficient for models to learn the implicit reasoning processes in raw texts. While there are many ways to acquire such thoughts of words, we explore the first step of acquiring ToW annotations through distilling from larger models. After continual pre-training with only 70K ToW annotations, we effectively improve models' reasoning performances by 7% to 9% on average and reduce model hallucination by up to 10%. At the same time, ToW is entirely agnostic to tasks and applications, introducing no additional biases on labels or semantics.
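The abstract describes injecting word-level thoughts into pretraining text before the word they explain. A minimal sketch of that data-augmentation step is below; the `<ToW>…</ToW>` delimiters, the `tow_augment` helper, and the mapping from word positions to distilled rationales are all illustrative assumptions, not the paper's actual annotation format.

```python
# Sketch of ToW-style data augmentation for continual pretraining.
# Assumption: thoughts distilled from a larger model are keyed by the
# index of the word they justify, and are inserted just before it.

def tow_augment(words, thoughts):
    """Interleave word-level thoughts into raw text.

    words    -- the original pretraining text, tokenized by whitespace
    thoughts -- dict mapping word index -> rationale for that word
    """
    out = []
    for i, word in enumerate(words):
        if i in thoughts:
            # Hypothetical delimiters marking the injected thought.
            out.append(f"<ToW>{thoughts[i]}</ToW>")
        out.append(word)
    return " ".join(out)

sample = tow_augment(
    ["The", "capital", "of", "France", "is", "Paris"],
    {5: "The context asks for France's capital, which is Paris."},
)
print(sample)
```

During continual pretraining, the model would then learn to emit the thought span before committing to the next word, making the reasoning step explicit in the training signal.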
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Inference Accuracy
Efficiency of Reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

ToW (Thoughts of Words)
Enhanced Reasoning Capability
Predictive Accuracy Improvement