🤖 AI Summary
This study investigates the semantic and syntactic information encoded in proto-tokens and the single-step mechanism by which they are reconstructed into text in non-autoregressive generation. Focusing on a setting where a frozen large language model reconstructs long texts in one step from two learnable proto-tokens, the work presents the first systematic disentanglement of semantic and syntactic components within the proto-tokens. It introduces a relational distillation method that injects batch-level semantic relationships into the proto-token space without compromising reconstruction quality. Through attention visualization, stability analysis, an anchor-based loss, and teacher-embedding regularization, the study finds that the m-token carries richer semantic information than the e-token. This work offers a viable pathway toward structured intermediate representations for non-autoregressive sequence-to-sequence systems.
📝 Abstract
Autoregressive large language models (LLMs) generate text token by token, requiring n forward passes to produce a sequence of length n. Recent work, Exploring the Latent Capacity of LLMs for One-Step Text Reconstruction (Mezentsev and Oseledets), shows that frozen LLMs can reconstruct hundreds of tokens from only two learned proto-tokens in a single forward pass, suggesting a path beyond the autoregressive paradigm. In this paper, we study what information these proto-tokens encode and how they behave under reconstruction and controlled constraints. We perform a series of experiments aimed at disentangling semantic and syntactic content in the two proto-tokens, analyzing the stability properties of the e-token, and visualizing attention patterns over the e-token during reconstruction. Finally, we test two regularization schemes for "imposing" semantic structure on the e-token using teacher embeddings: an anchor-based loss and a relational distillation objective. Our results indicate that the m-token tends to capture semantic information more strongly than the e-token under standard optimization; that anchor-based constraints trade off sharply with reconstruction accuracy; and that relational distillation can transfer batch-level semantic relations into the proto-token space without sacrificing reconstruction quality, supporting the feasibility of future non-autoregressive seq2seq systems that predict proto-tokens as an intermediate representation.
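As a rough sketch of the kind of relational distillation objective described above (not the authors' implementation; the function names, dimensions, and use of cosine similarity are illustrative assumptions), the idea is to match the pairwise similarity structure of proto-token vectors within a batch to that of teacher embeddings, rather than pinning each proto-token to an absolute anchor point:

```python
import numpy as np

def cosine_sim_matrix(x):
    # Row-normalize, then compute the (batch x batch) matrix of
    # pairwise cosine similarities between examples in the batch.
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    xn = x / np.clip(norms, 1e-8, None)
    return xn @ xn.T

def relation_distillation_loss(proto, teacher):
    # Penalize the mismatch between the batch-level similarity
    # structure of the proto-token space and the teacher space.
    # Note: proto and teacher may have different dimensionalities;
    # only their relational (similarity) structure is compared.
    s_proto = cosine_sim_matrix(proto)
    s_teacher = cosine_sim_matrix(teacher)
    return float(np.mean((s_proto - s_teacher) ** 2))

# Hypothetical shapes: 8 texts per batch, 32-dim teacher embeddings
# (e.g., from a sentence encoder), 64-dim learned proto-tokens.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(8, 32))
proto = rng.normal(size=(8, 64))
loss = relation_distillation_loss(proto, teacher)
```

Because only relative similarities are constrained, this term can be added to the reconstruction objective without forcing proto-tokens toward specific teacher vectors, which is consistent with the reported finding that relational distillation preserves reconstruction quality where anchor-based constraints do not.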