Learning Compositional Transferability of Time Series for Source-Free Domain Adaptation

📅 2025-04-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses source-free domain adaptation (SFDA) for time-series classification—i.e., enabling robust cross-domain transfer when only a source-pretrained classifier is available, while source data and target labels remain inaccessible. To this end, we propose a hierarchical decoupled reconstruction architecture: a frozen U-Net backbone provides coarse-grained temporal reconstruction; dual branches—source replay and shift compensation—dynamically fuse via learnable weights to decouple adaptation capability from source priors without accessing source data. We further enhance generalization via residual connections, a lightweight autoencoder, and a test-time stability-aware rescaling mechanism. Evaluated on three mainstream time-series benchmarks, our method achieves state-of-the-art performance, significantly outperforming existing SFDA approaches.

📝 Abstract
Domain adaptation is challenging for time series classification due to the highly dynamic nature of time series. This study tackles the most difficult subtask, source-free domain adaptation, in which both target labels and source data are inaccessible. To reuse the classification backbone pre-trained on source data, time series reconstruction is a sound solution that aligns target and source time series by minimizing the reconstruction errors of both. However, simply fine-tuning the source pre-trained reconstruction model on target data may lose the learnt priors, and a single encoder-decoder struggles to accommodate domain-varying temporal patterns. Therefore, this paper disentangles the composition of domain transferability with a compositional architecture for time series reconstruction. The preceding component is a U-net frozen after pre-training, whose output during adaptation is the initial reconstruction of a given target time series, acting as a coarse step that prompts the subsequent finer adaptation. The following pipeline for finer adaptation comprises two parallel branches: a source replay branch that uses a residual link to preserve the U-net output, and an offset compensation branch that applies an additional autoencoder (AE) to further warp the U-net output. By deploying a learnable factor on each branch to scale its contribution to the final reconstruction, the data transferability is disentangled and the reconstructive capability learnt from source data is retained. During inference, beyond the batch-level optimization used in training, we search at test time for a stability-aware rescaling of the source replay branch to tolerate instance-wise variation. The experimental results show that this compositional architecture for time series reconstruction leads to SOTA performance on 3 widely used benchmarks.
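The two-branch composition described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the smoothing "U-net", the tiny AE weights, and the scalar factors `alpha`/`beta` are all hypothetical stand-ins for the frozen backbone, the lightweight autoencoder, and the learnable per-branch factors.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_unet(x):
    """Stand-in for the frozen U-net: a coarse reconstruction,
    here simply a 3-tap moving average of the input."""
    kernel = np.ones(3) / 3.0
    return np.convolve(x, kernel, mode="same")

def toy_autoencoder(h, w_enc, w_dec):
    """Stand-in for the lightweight AE in the offset compensation
    branch: encode to a small latent, then decode."""
    return w_dec @ np.tanh(w_enc @ h)

def compositional_reconstruction(x, alpha, beta, w_enc, w_dec):
    """Final output = learnable-weighted sum of two branches:
    source replay (residual link preserving the U-net output) and
    offset compensation (AE warping the U-net output)."""
    coarse = toy_unet(x)                             # frozen coarse step
    replay = coarse                                  # source replay branch
    offset = toy_autoencoder(coarse, w_enc, w_dec)   # offset compensation branch
    return alpha * replay + beta * offset

T = 16  # length of the toy target time series
x = np.sin(np.linspace(0, 2 * np.pi, T)) + 0.1 * rng.standard_normal(T)
w_enc = 0.1 * rng.standard_normal((4, T))  # tiny AE weights (hypothetical)
w_dec = 0.1 * rng.standard_normal((T, 4))
x_hat = compositional_reconstruction(x, alpha=1.0, beta=0.5,
                                     w_enc=w_enc, w_dec=w_dec)
```

During adaptation, only `alpha`, `beta`, and the AE weights would be optimized against the reconstruction error on target batches; the coarse backbone stays frozen, which is what preserves the source-learnt priors.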
Problem

Research questions and friction points this paper is trying to address.

Addresses source-free domain adaptation for time series classification
Disentangles domain transferability via compositional reconstruction architecture
Improves adaptation by preserving source knowledge and compensating offsets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Frozen U-net provides the initial coarse reconstruction
Two parallel branches perform the finer adaptation
Learnable factors scale the composition of the branches
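The test-time step mentioned above (searching a stability-aware rescaling of the source replay branch per instance) can be sketched as a simple grid search. The reconstruction function and the use of reconstruction error as the selection criterion are hypothetical simplifications; the paper's actual stability criterion is not detailed in this listing.

```python
import numpy as np

def reconstruct(x, scale):
    """Hypothetical two-branch reconstruction: the source replay
    branch is rescaled by `scale`; the offset compensation branch
    is kept fixed here as a small shifted copy of the input."""
    replay = x                      # stand-in for the preserved U-net output
    offset = 0.1 * np.roll(x, 1)    # stand-in for the AE compensation
    return scale * replay + offset

def stability_aware_rescale(x, candidates):
    """Grid-search a per-instance scale for the source replay branch,
    keeping the one with the lowest reconstruction error (used here
    as a proxy for the paper's stability-aware criterion)."""
    errors = [np.mean((reconstruct(x, s) - x) ** 2) for s in candidates]
    return candidates[int(np.argmin(errors))]

x = np.sin(np.linspace(0, 2 * np.pi, 32))
best = stability_aware_rescale(x, candidates=[0.8, 0.9, 1.0, 1.1, 1.2])
```

Because the search is per instance rather than per batch, it tolerates instance-wise variation that the batch-level optimization used during training cannot capture.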
Hankang Sun, Fudan University
Guiming Li, Fudan University
Su Yang, Fudan University (Social Computing, Urban Mobility, Pattern Recognition)
Baoqi Li, Chinese Academy of Sciences, Institute of Acoustics