AI Summary
Traditional language models typically employ word embeddings that disregard explicit spelling information, thereby limiting their performance on spelling-sensitive tasks and overall language understanding. This work proposes a spelling-aware embedding layer that directly injects the character-level orthographic structure of tokens into their word embeddings, enhancing the model's sensitivity to lexical morphology without increasing parameter count or computational complexity. Experimental evaluation and scaling law analysis across language models ranging from 40M to 800M parameters demonstrate consistent performance gains on standard language modeling benchmarks. Specifically, the proposed approach achieves equivalent test loss with approximately 8% less computation and training data, effectively improving resource efficiency.
Abstract
We introduce a simple modification to the embedding layer: infusing token embeddings with information about their spelling. Models trained with these embeddings improve not only on spelling tasks, but also across standard benchmarks. We conduct scaling studies for models with 40M to 800M parameters, which suggest that the improvements are equivalent to needing about 8% less compute and data to achieve the same test loss.
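The abstract does not specify how spelling information is injected, so the following is only an illustrative sketch under one plausible assumption: each token's embedding is augmented with a fixed, parameter-free feature vector derived from its character n-grams, keeping the learned parameter count unchanged. All names here (`spelling_vector`, `SpellingAwareEmbedding`) are hypothetical and not from the paper.

```python
import numpy as np
import zlib


def spelling_vector(token: str, dim: int, n: int = 3) -> np.ndarray:
    """Hash a token's character n-grams into `dim` buckets.

    The result depends only on the token's spelling and adds no
    learned parameters. This mechanism is an assumption, not the
    paper's actual method.
    """
    vec = np.zeros(dim)
    padded = f"<{token}>"  # boundary markers distinguish prefixes/suffixes
    for i in range(len(padded) - n + 1):
        gram = padded[i:i + n]
        bucket = zlib.crc32(gram.encode()) % dim  # deterministic hash
        vec[bucket] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec


class SpellingAwareEmbedding:
    """Token embedding plus a fixed spelling component (sketch only)."""

    def __init__(self, vocab, dim, rng=None):
        rng = rng or np.random.default_rng(0)
        self.dim = dim
        self.index = {tok: i for i, tok in enumerate(vocab)}
        # Learned lookup table, same size as a standard embedding layer.
        self.table = rng.normal(scale=0.02, size=(len(vocab), dim))
        # Precomputed, parameter-free spelling features per token.
        self.spelling = np.stack([spelling_vector(t, dim) for t in vocab])

    def __call__(self, token: str) -> np.ndarray:
        i = self.index[token]
        # Inject spelling information by adding the fixed component.
        return self.table[i] + self.spelling[i]
```

Because the spelling component is deterministic and precomputed, morphologically related tokens such as "cat" and "cats" share overlapping n-gram buckets, so their embeddings start out closer than unrelated tokens' would, at no extra parameter cost.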