Scaling Up Temporal Domain Generalization via Temporal Experts Averaging

📅 2025-09-30
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address generalization under temporal distribution shifts (e.g., lexical evolution), this paper proposes Temporal Experts Averaging (TEA), an efficient alternative to full-model future-weight prediction. TEA constructs temporally specialized, functionally complementary experts with similar parameters via constrained fine-tuning; models their weight trajectories in a principal-component subspace; and adaptively fuses them to balance bias and variance. Crucially, TEA avoids predicting full future model weights, substantially reducing computational overhead. Evaluated across seven benchmarks, five model architectures, and two temporal settings, TEA outperforms state-of-the-art methods by up to 69% in accuracy while being up to 60× more efficient, achieving, for the first time, both high accuracy and high efficiency in temporal domain generalization.

πŸ“ Abstract
Temporal Domain Generalization (TDG) aims to generalize across temporal distribution shifts, e.g., lexical change over time. Prior work often addresses this by predicting future model weights. However, full model prediction is prohibitively expensive for even reasonably sized models. Thus, recent methods only predict the classifier layer, limiting generalization by failing to adjust other model components. To address this, we propose Temporal Experts Averaging (TEA), a novel and scalable TDG framework that updates the entire model using weight averaging to maximize generalization potential while minimizing computational costs. Our theoretical analysis guides us to two steps that enhance generalization to future domains. First, we create expert models with functional diversity yet parameter similarity by fine-tuning a domain-agnostic base model on individual temporal domains while constraining weight changes. Second, we optimize the bias-variance tradeoff through adaptive averaging coefficients derived from modeling temporal weight trajectories in a principal component subspace. Experts' contributions are based on their projected proximity to future domains. Extensive experiments across 7 TDG benchmarks, 5 models, and 2 TDG settings show that TEA outperforms prior TDG methods by up to 69% while being up to 60x more efficient.
Problem

Research questions and friction points this paper is trying to address.

Scalable framework for temporal domain generalization
Addresses limitations of partial model weight prediction
Enhances generalization via constrained fine-tuning and adaptive averaging
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tunes the base model on individual temporal domains under weight-change constraints
Averages the resulting experts with adaptive coefficients to balance bias and variance
Projects expert weight trajectories into a principal-component subspace to estimate proximity to future domains (a fusion sketch follows this list)