🤖 AI Summary
How large language models (LLMs) encode and use temporal information to reason in a historically coherent way remains unclear. This work proposes the Time Travel Engine (TTE), which constructs a shared, continuous temporal manifold within the residual stream and directly modulates latent representations, aligning linguistic style, vocabulary, and conceptual content with a target historical period. The authors show for the first time that temporal information in the latent space of LLMs forms a continuous, traversable geometric structure, and further find that the temporal subspaces of Chinese and English are topologically isomorphic, suggesting that historical evolution follows a universal geometric logic across languages. TTE enables fluent navigation of the “spirit of the age” across diverse model architectures while significantly suppressing future-knowledge leakage, demonstrating broad applicability and effectiveness.
📝 Abstract
Time functions as a fundamental dimension of human cognition, yet the mechanisms by which Large Language Models (LLMs) encode chronological progression remain opaque. We demonstrate that temporal information in their latent space is organized not as discrete clusters but as a continuous, traversable geometry. We introduce the Time Travel Engine (TTE), an interpretability-driven framework that projects diachronic linguistic patterns onto a shared chronological manifold. Unlike surface-level prompting, TTE directly modulates latent representations to induce coherent stylistic, lexical, and conceptual shifts aligned with target eras. By parameterizing diachronic evolution as a continuous manifold within the residual stream, TTE enables fluid navigation through period-specific “zeitgeists” while restricting access to future knowledge. Furthermore, experiments across diverse architectures reveal topological isomorphism between the temporal subspaces of Chinese and English, indicating that distinct languages share a universal geometric logic of historical evolution. These findings bridge historical linguistics with mechanistic interpretability, offering a novel paradigm for controlling temporal reasoning in neural networks.
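The core idea (shifting residual-stream activations along a continuous temporal axis toward a target era) can be illustrated with a toy sketch. This is not the paper's implementation: the names `temporal_direction`, `era_coordinate`, and `steer`, the year range, and the steering strength `alpha` are all illustrative assumptions, and a single unit vector stands in for the learned temporal manifold.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 16  # toy residual-stream width

# Assumed: a unit "temporal direction" standing in for the learned manifold.
temporal_direction = rng.standard_normal(d_model)
temporal_direction /= np.linalg.norm(temporal_direction)

def era_coordinate(year, year_min=1800, year_max=2000):
    """Map a target year onto [-1, 1] along the (assumed 1-D) temporal axis."""
    return 2.0 * (year - year_min) / (year_max - year_min) - 1.0

def steer(hidden_states, year, alpha=4.0):
    """Shift residual-stream activations toward the target era."""
    return hidden_states + alpha * era_coordinate(year) * temporal_direction

h = rng.standard_normal((5, d_model))  # (seq_len, d_model) activations
h_1850 = steer(h, 1850)
h_1950 = steer(h, 1950)

# Projections onto the temporal axis move monotonically with the target year,
# which is the sense in which the latent geometry is "traversable" here.
proj = lambda x: x @ temporal_direction
print(proj(h_1850).mean() < proj(h).mean() < proj(h_1950).mean())  # True
```

In an actual LLM this kind of edit would be applied to the residual stream via a forward hook at chosen layers; the toy arrays here only show the geometry of the intervention.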