🤖 AI Summary
This work addresses the challenge of time-series forecasting in federated learning settings, where heterogeneity in temporal granularity and variable sets across nodes hinders effective collaboration. To tackle this issue, the authors propose PiXTime, a novel framework that employs personalized patch embedding to unify multi-granularity time-series representations and introduces a global Variable Embedding (VE) table to align semantic meanings of variables across nodes. Building upon this unified representation, PiXTime integrates a shared Transformer architecture with cross-attention mechanisms to enable efficient and collaborative modeling. Evaluated under realistic federated conditions, PiXTime achieves state-of-the-art performance on eight real-world time-series benchmarks, demonstrating its effectiveness in overcoming the challenges posed by heterogeneity in federated time-series forecasting.
📝 Abstract
Time series are highly valuable and rarely shareable across nodes, making federated learning a promising paradigm for leveraging distributed temporal data. However, differing sampling standards lead to diverse time granularities and variable sets across nodes, hindering classical federated learning. We propose PiXTime, a novel time series forecasting model designed for federated learning that enables effective prediction across nodes with multi-granularity data and heterogeneous variable sets. PiXTime employs a personalized Patch Embedding to map each node's granularity-specific time series into token sequences of a unified dimension for processing by a subsequent shared model, and uses a global Variable Embedding (VE) Table to align variable category semantics across nodes, thereby enhancing cross-node transferability. With a Transformer-based shared model, PiXTime captures representations of auxiliary series with arbitrary numbers of variables and uses cross-attention to enhance the prediction of the target series. Experiments show that PiXTime achieves state-of-the-art performance in federated settings and superior performance on eight widely used real-world traditional benchmarks.
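To make the two key ideas concrete, here is a minimal NumPy sketch of (a) a personalized patch embedding that maps node-specific-granularity series into tokens of one shared dimension, and (b) cross-attention from target tokens to auxiliary tokens of arbitrary length. All names, patch lengths, and dimensions below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def patch_embed(series, patch_len, W):
    """Split a 1-D series into non-overlapping patches of node-specific
    length and project each patch to the shared model dimension d_model.
    W has shape (patch_len, d_model) and is personalized per node."""
    n = len(series) // patch_len
    patches = series[: n * patch_len].reshape(n, patch_len)
    return patches @ W  # (n_patches, d_model): unified token dimension

def cross_attention(Q, K, V):
    """Plain scaled dot-product cross-attention: target tokens (Q) attend
    over auxiliary tokens (K, V), whose count may differ from Q's."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

d_model = 16
# Hypothetical nodes with different sampling granularities, hence
# different patch lengths and personalized projection matrices.
W_a = rng.normal(size=(4, d_model))   # node A: patch length 4
W_b = rng.normal(size=(6, d_model))   # node B: patch length 6

tokens_a = patch_embed(rng.normal(size=96), 4, W_a)  # (24, 16)
tokens_b = patch_embed(rng.normal(size=96), 6, W_b)  # (16, 16)

# Target-series tokens are enhanced by attending to auxiliary tokens;
# the output keeps the target's length but mixes in auxiliary information.
enhanced = cross_attention(tokens_a, tokens_b, tokens_b)
print(tokens_a.shape, tokens_b.shape, enhanced.shape)
```

Despite the nodes' different granularities, both token sequences live in the same `d_model`-dimensional space, which is what lets a single shared Transformer process them; the cross-attention output has the target's sequence length regardless of how many auxiliary tokens exist.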