🤖 AI Summary
Large language models (LLMs) suffer from temporal misalignment in millennium-scale reasoning, primarily due to sparse long-range temporal signals in training data, leading to inaccurate time representations and catastrophic forgetting. To address this, we propose a cyclical time modeling framework inspired by the traditional Chinese sexagenary cycle (Gānzhī), which maps Gregorian years onto a 60-year periodic sequence. We introduce polar-coordinate spatiotemporal encoding, enhanced positional encoding, and post-training representation alignment to jointly refine temporal semantics. This work is the first to systematically integrate the sexagenary cycle into LLM temporal representation learning. Evaluated on a newly constructed long-horizon temporal reasoning benchmark, our method achieves a +18.7% improvement in temporal reasoning accuracy, substantially mitigating chronological confusion and knowledge discontinuity across millennia. The approach provides an interpretable, generalizable, and structurally grounded solution for long-term temporal understanding in foundation models.
📝 Abstract
Large language models (LLMs) suffer from temporal misalignment issues, especially across long time spans. The issue arises because LLMs are trained on large amounts of data in which temporal information is sparse over long horizons, such as thousands of years, resulting in insufficient learning or catastrophic forgetting by the LLMs. This paper proposes a methodology named "Ticktack" for addressing the LLM's long-time-span misalignment in a yearly setting. Specifically, we first propose to utilize the sexagenary year expression instead of the Gregorian year expression employed by LLMs, achieving a more uniform distribution at yearly granularity. Then, we employ polar coordinates to model the sexagenary cycle of 60 terms and the year order within each term, with additional temporal encoding to ensure LLMs understand them. Finally, we present a temporal representational alignment approach for post-training LLMs that effectively distinguishes time points with relevant knowledge, hence improving performance on time-related tasks, particularly over long periods. We also create a long-time-span benchmark for evaluation. Experimental results demonstrate the effectiveness of our proposal.
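To make the core idea concrete, here is a minimal sketch of the mapping the abstract describes: converting a Gregorian year to its sexagenary (Gānzhī) term, and a polar-coordinate encoding in which the angle captures the year's position within the 60-year cycle and the radius-like component captures which cycle it falls in. The function names and the exact encoding are illustrative assumptions, not the paper's implementation; the convention that 4 CE begins a cycle (a Jiǎzǐ year) is the standard one.

```python
import math

# Ten Heavenly Stems and twelve Earthly Branches of the Gānzhī system.
STEMS = ["Jiǎ", "Yǐ", "Bǐng", "Dīng", "Wù", "Jǐ", "Gēng", "Xīn", "Rén", "Guǐ"]
BRANCHES = ["Zǐ", "Chǒu", "Yín", "Mǎo", "Chén", "Sì",
            "Wǔ", "Wèi", "Shēn", "Yǒu", "Xū", "Hài"]

def sexagenary(year: int) -> str:
    """Map a Gregorian year (CE) to its sexagenary term.

    By convention, 4 CE is the first term (Jiǎzǐ) of a cycle."""
    idx = (year - 4) % 60
    return STEMS[idx % 10] + BRANCHES[idx % 12]

def polar_encoding(year: int) -> tuple[int, float, float]:
    """Hypothetical polar-coordinate encoding (illustrative only):
    the angle encodes the position within the 60-year cycle, and
    the cycle index says which 60-year cycle the year belongs to."""
    offset = year - 4
    cycle, idx = divmod(offset, 60)
    theta = 2 * math.pi * idx / 60
    return (cycle, math.cos(theta), math.sin(theta))
```

For example, `sexagenary(1984)` yields "JiǎZǐ" (1984 opened a new cycle), and `sexagenary(2024)` yields "JiǎChén". Because the cyclic position is encoded as `(cos θ, sin θ)`, years 60 apart share the same angular coordinates and differ only in the cycle index, which is what gives the representation its uniform periodic structure.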