🤖 AI Summary
Language models suffer performance degradation under temporal distribution shift, yet continual fine-tuning to keep them current is computationally expensive. This paper proposes an unsupervised representation-editing method that requires no weight updates, no labeled target-period data, and no timestamps. It automatically extracts steering direction vectors from unlabeled historical data and dynamically calibrates the model's latent-space representations to achieve cross-temporal alignment. The core contribution is the first fully fine-tuning-free, time-information-free approach to temporal adaptation, which remains robust even when target-period data is entirely unavailable. Extensive experiments on multiple time-sensitive downstream tasks show significant performance gains with negligible inference overhead, establishing a lightweight paradigm for adapting language models in time-critical applications.
📝 Abstract
Language models often struggle with temporal misalignment: performance degradation caused by shifts in the temporal distribution of data. Continuously updating models to avoid this degradation is expensive. Can models be adapted without updating their weights? We present TARDIS, an unsupervised representation-editing method that addresses this challenge. TARDIS extracts steering vectors from unlabeled data and adjusts the model's representations to better align with the target time period's distribution. Our experiments show that TARDIS improves downstream task performance without fine-tuning, mitigates temporal misalignment even when data from the exact target time period is unavailable, and remains efficient even when the temporal information of the target data points is unknown at inference time.
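The mechanism described above — deriving a steering vector from unlabeled data and adding it to the model's hidden states at inference time, with no weight updates — can be sketched roughly as follows. This is an illustrative simplification, not the paper's exact method: the function names are hypothetical, and the mean-difference formulation is one common way to build steering vectors from two pools of activations (here, source-period vs. target-period text).

```python
import numpy as np

def extract_steering_vector(source_acts: np.ndarray, target_acts: np.ndarray) -> np.ndarray:
    """Illustrative steering vector: the difference between the mean
    hidden-state activation over target-period text and over source-period
    text. Both inputs are (num_examples, hidden_dim) arrays of activations
    collected from unlabeled data; no labels or timestamps per example are
    needed beyond the two pools themselves."""
    return target_acts.mean(axis=0) - source_acts.mean(axis=0)

def edit_representation(hidden: np.ndarray, steer: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Shift a hidden representation toward the target period's distribution.
    This edits activations at inference time only; model weights are untouched."""
    return hidden + alpha * steer

# Toy demo: random vectors stand in for LM hidden states from two time periods.
rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(100, 8))   # "older" period activations
target = rng.normal(0.5, 1.0, size=(100, 8))   # "newer" period activations

v = extract_steering_vector(source, target)
edited = edit_representation(source, v)

# After editing, the source activations' mean matches the target mean exactly,
# since every example is shifted by the same constant offset.
print(np.allclose(edited.mean(axis=0), target.mean(axis=0)))  # → True
```

In practice the activations would come from a specific transformer layer, and `alpha` would control the steering strength; the demo only shows that a constant additive edit aligns the first moment of the two distributions.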