🤖 AI Summary
This work addresses source-free domain adaptation (SFDA) for time-series classification, i.e., enabling robust cross-domain transfer when only a source-pretrained classifier is available and both source data and target labels are inaccessible. To this end, we propose a hierarchical decoupled reconstruction architecture: a frozen U-Net backbone provides coarse-grained temporal reconstruction, and two parallel branches, source replay and offset compensation, are dynamically fused via learnable weights to decouple adaptation capability from source priors without accessing source data. Generalization is further enhanced by a residual connection, a lightweight autoencoder, and a test-time stability-aware rescaling mechanism. Evaluated on three mainstream time-series benchmarks, our method achieves state-of-the-art performance, significantly outperforming existing SFDA approaches.
📝 Abstract
Domain adaptation is challenging for time series classification due to the highly dynamic nature of temporal patterns. This study tackles the most difficult subtask, source-free domain adaptation, in which both target labels and source data are inaccessible. To reuse a classification backbone pre-trained on source data, time series reconstruction is a sound solution that aligns target and source time series by minimizing the reconstruction errors of both. However, simply fine-tuning the source pre-trained reconstruction model on target data may lose the learnt priors, and a single encoder-decoder struggles to accommodate domain-varying temporal patterns. Therefore, this paper disentangles the composition of domain transferability with a compositional architecture for time series reconstruction. The preceding component is a U-Net frozen after pre-training; during adaptation, its output is the initial reconstruction of a given target time series, acting as a coarse step that prompts the subsequent finer adaptation. The following pipeline for finer adaptation consists of two parallel branches: a source replay branch that uses a residual link to preserve the U-Net's output, and an offset compensation branch that applies an additional autoencoder (AE) to further warp the U-Net's output. By deploying a learnable factor on each branch to scale its contribution to the final reconstruction, data transferability is disentangled and the reconstructive capability learnt from source data is retained. During inference, beyond the batch-level optimization used in training, we search at test time for a stability-aware rescaling of the source replay branch to tolerate instance-wise variation. Experimental results show that this compositional architecture for time series reconstruction achieves state-of-the-art performance on three widely used benchmarks.
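To make the compositional reconstruction concrete, here is a minimal NumPy sketch of the dual-branch fusion described above. All function names, shapes, and the linear stand-ins for the frozen U-Net and the lightweight AE are illustrative assumptions, not the paper's actual code; the point is only the structure: coarse output, residual source-replay branch, AE-warped offset branch, and a learnably scaled composition.

```python
import numpy as np

def frozen_unet(x):
    # Stand-in for the frozen pre-trained U-Net backbone: returns the
    # coarse reconstruction (identity here, purely for illustration).
    return x

def lightweight_ae(u, W_enc, W_dec):
    # Stand-in for the offset compensation AE: a linear encode-decode
    # that further "warps" the coarse reconstruction.
    return (u @ W_enc) @ W_dec

def reconstruct(x, W_enc, W_dec, alpha, beta):
    u = frozen_unet(x)                        # coarse reconstruction
    replay = u                                # source replay branch (residual link)
    offset = lightweight_ae(u, W_enc, W_dec)  # offset compensation branch
    # alpha and beta play the role of the learnable scaling factors;
    # in practice they would be optimized per batch during adaptation.
    return alpha * replay + beta * offset

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))              # batch of 8 series, length 16
W_enc = 0.1 * rng.standard_normal((16, 4))    # hypothetical AE encoder weights
W_dec = 0.1 * rng.standard_normal((4, 16))    # hypothetical AE decoder weights
x_hat = reconstruct(x, W_enc, W_dec, alpha=1.0, beta=0.5)
```

At test time, the paper's stability-aware rescaling would amount to searching over the source-replay factor (`alpha` here) per instance rather than keeping the batch-optimized value; that search procedure is not shown in this sketch.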