🤖 AI Summary
This work investigates the character-level representation mechanisms underlying large language models’ (LLMs) high token-spelling accuracy. We identify a critical representational deficiency: the embedding layer severely under-encodes characters following the first one, forcing the model to dynamically reconstruct character knowledge in middle-to-higher Transformer layers—spelling capability emerges abruptly at specific “breakthrough layers,” rather than propagating incrementally across layers. To systematically validate this phenomenon, we propose a three-tier analytical framework: (1) interpretable probing classifiers to quantify character information per layer; (2) knowledge neuron localization to identify critical computational units; and (3) joint attention and inter-layer representation analysis to trace information reconstruction pathways. Our study is the first to rigorously demonstrate the counterintuitive nature of LLM spelling—coexisting representational incompleteness and emergent capability—establishing a novel paradigm for understanding foundational symbolic operations in large models.
📝 Abstract
Large language models (LLMs) can spell out tokens character by character with high accuracy, yet they struggle with more complex character-level tasks, such as identifying compositional subcomponents within tokens. In this work, we investigate how LLMs internally represent and utilize character-level information during the spelling-out process. Our analysis reveals that, although spelling out is a simple task for humans, it is not handled in a straightforward manner by LLMs. Specifically, we show that the embedding layer does not fully encode character-level information, particularly beyond the first character. As a result, LLMs rely on intermediate and higher Transformer layers to reconstruct character-level knowledge, where we observe a distinct"breakthrough"in their spelling behavior. We validate this mechanism through three complementary analyses: probing classifiers, identification of knowledge neurons, and inspection of attention weights.