🤖 AI Summary
Standard transformers lack an intrinsic mechanism to compress historical information into compact latent states under the conventional next-token prediction objective, which limits generalization. To address this, we propose NextLat, a method that adds a self-supervised "next latent state prediction" auxiliary objective while preserving the original architecture. This objective guides latent representations to converge toward theoretically grounded belief states, thereby inducing an internal world model with explicit state transitions and a recurrent inductive bias. NextLat requires no modifications to the attention mechanism; instead, it jointly optimizes language modeling and latent dynamics prediction to yield interpretable, highly compressed representations of the history. Experiments demonstrate that NextLat significantly outperforms baselines across world modeling, multi-step reasoning, long-horizon planning, and language understanding tasks, achieving consistent improvements in downstream accuracy, representation compression, and lookahead planning.
📝 Abstract
Transformers replace recurrence with a memory that grows with sequence length and self-attention that enables ad-hoc lookups over past tokens. Consequently, they lack an inherent incentive to compress history into compact latent states with consistent transition rules, and often learn solutions that generalize poorly. We introduce Next-Latent Prediction (NextLat), which extends standard next-token training with self-supervised predictions in the latent space. Specifically, NextLat trains a transformer to learn latent representations that are predictive of its next latent state given the next output token. Theoretically, we show that these latents provably converge to belief states: compressed summaries of the history sufficient to predict the future. This simple auxiliary objective also injects a recurrent inductive bias into transformers, while leaving their architecture, parallel training, and inference unchanged. NextLat effectively encourages the transformer to form compact internal world models with its own belief states and transition dynamics -- a crucial property absent in standard next-token prediction transformers. Empirically, across benchmarks targeting core sequence modeling competencies -- world modeling, reasoning, planning, and language modeling -- NextLat demonstrates significant gains over standard next-token training in downstream accuracy, representation compression, and lookahead planning. NextLat stands as a simple and efficient paradigm for shaping transformer representations toward stronger generalization.
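The objective described above can be sketched as a combined loss: the usual next-token cross-entropy, plus a term that asks a small transition head to predict the latent state at position t+1 from the latent at position t together with the next token. The sketch below is a minimal illustration under assumed details: the function and parameter names (`nextlat_loss`, the linear transition head `W`, the embedding table `E`, the weight `lam`, and the use of a fixed, stop-gradient latent target) are hypothetical stand-ins, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over logits.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def nextlat_loss(h, tokens, logits, W, E, lam=0.5):
    """Illustrative combined next-token + next-latent loss (not the paper's code).

    h:      (T, D) hidden (latent) states from the transformer backbone
    tokens: (T,)   token ids of the sequence
    logits: (T, V) language-model logits at each position
    W:      (2D, D) linear transition head (stand-in for a small MLP)
    E:      (V, D)  token embedding table
    lam:    weight on the auxiliary latent-prediction term (assumed)
    """
    T = len(tokens)
    # Standard next-token cross-entropy: position t predicts token t+1.
    probs = softmax(logits[:-1])
    lm = -np.mean(np.log(probs[np.arange(T - 1), tokens[1:]]))
    # Next-latent prediction: from (h_t, embed(x_{t+1})), predict h_{t+1}.
    # The target latent h_{t+1} is treated as fixed (a stop-gradient target).
    inp = np.concatenate([h[:-1], E[tokens[1:]]], axis=-1)
    pred = inp @ W
    latent = np.mean((pred - h[1:]) ** 2)
    return lm + lam * latent
```

Because both terms are computed over all positions in parallel, the auxiliary loss adds recurrent structure to the representations without changing the backbone's parallel training or inference, matching the claim in the abstract.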