Spelling Bee Embeddings for Language Modeling

📅 2026-01-25
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Traditional language models typically employ word embeddings that disregard explicit spelling information, limiting their performance on spelling-sensitive tasks and overall language understanding. This work proposes a spelling-aware embedding layer that directly injects the character-level orthographic structure of tokens into their word embeddings, enhancing sensitivity to lexical morphology without increasing parameter count or computational complexity. Experimental evaluation and scaling-law analysis across language models ranging from 40M to 800M parameters demonstrate consistent gains on standard language modeling benchmarks. Specifically, the proposed approach reaches the same test loss with approximately 8% less compute and training data, improving resource efficiency.
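The summary describes the mechanism only at a high level, so the PyTorch sketch below is an illustration rather than the paper's exact construction: it assumes spelling is injected by adding a fixed, non-learned character-derived code to each token's learned embedding, which keeps the parameter count unchanged. The class name, the slot-per-character layout, and the max_chars value are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class SpellingAwareEmbedding(nn.Module):
    """Hypothetical sketch of a spelling-infused embedding layer.

    Each token's learned embedding is summed with a fixed, non-learned code
    derived from the token's characters, so no parameters are added beyond
    the usual embedding table.
    """

    def __init__(self, vocab_strings, d_model, max_chars=16):
        super().__init__()
        self.tok_emb = nn.Embedding(len(vocab_strings), d_model)

        # Deterministic spelling code per token: one slice of the embedding
        # dimension per character position, filled with a simple function of
        # the character identity.
        codes = torch.zeros(len(vocab_strings), d_model)
        slot = max(1, d_model // max_chars)
        for tok_id, text in enumerate(vocab_strings):
            for pos, ch in enumerate(text[:max_chars]):
                start = pos * slot
                codes[tok_id, start:start + slot] = (ord(ch) % 128) / 128.0
        self.register_buffer("spelling_codes", codes)  # frozen, not trained

    def forward(self, token_ids):
        # Learned embedding plus the frozen spelling code of each token.
        return self.tok_emb(token_ids) + self.spelling_codes[token_ids]


# Toy usage: four-token vocabulary, batch of one sequence.
vocab = ["the", "cat", "spell", "##ing"]
layer = SpellingAwareEmbedding(vocab, d_model=64)
out = layer(torch.tensor([[0, 1, 2, 3]]))  # shape: (1, 4, 64)
```

Because the spelling code is stored as a buffer rather than a parameter, the addition costs one extra lookup and sum per token, in the spirit of the summary's claim that the change adds neither parameters nor meaningful compute.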

πŸ“ Abstract
We introduce a simple modification to the embedding layer. The key change is to infuse token embeddings with information about their spelling. Models trained with these embeddings improve not only on spelling, but also across standard benchmarks. We conduct scaling studies for models with 40M to 800M parameters, which suggest that the improvements are equivalent to needing about 8% less compute and data to achieve the same test loss.
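The 8% figure is a scaling-law reading. As a back-of-the-envelope illustration (not the paper's actual fit), assume both models follow a power law L(C) = a * C^(-b) in compute C; a constant loss offset between the two curves then converts into an equivalent compute ratio. The coefficients below are invented for the example.

```python
import numpy as np

# Assumed power-law fit L(C) = a * C**(-b); coefficients are illustrative only.
a, b = 10.0, 0.05
compute = np.logspace(18, 21, 50)               # FLOPs grid
baseline_loss = a * compute ** (-b)

# Suppose the modified model reaches each baseline loss with 8% less compute.
modified_loss = a * (compute / 0.92) ** (-b)

# Recover the implied compute ratio at equal loss from the two loss curves.
loss_ratio = modified_loss / baseline_loss      # constant, equal to 0.92**b
compute_ratio = loss_ratio ** (1.0 / b)         # ~0.92, i.e. ~8% saving
print(round(float(compute_ratio[0]), 3))        # 0.92
```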
Problem

Research questions and friction points this paper is trying to address.

spelling
language modeling
embeddings
token representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

spelling-aware embeddings
language modeling
embedding layer modification
scaling laws
data efficiency
🔎 Similar Papers
No similar papers found.
Markus N. Rabe
Sutter Hill Ventures
Judith Clymo
University of California, Santa Cruz
Zheren Dong
Independent Researcher