Efficient Source-Free Time-Series Adaptation via Parameter Subspace Disentanglement

📅 2024-10-03
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of balancing computational efficiency and performance preservation in source-free time-series domain adaptation (SFDA) on resource-constrained edge devices, this paper proposes a Tucker decomposition-based weight decoupling and reparameterization method: the model is compressed during source-domain training, and only low-dimensional subspace factors are fine-tuned on the target domain. This work is the first to introduce parameter subspace decoupling into time-series SFDA and, by drawing on PAC-Bayesian theory, formally characterizes the implicit capacity constraint underlying selective fine-tuning, thereby unifying efficient adaptation with theoretical generalization guarantees. Experiments demonstrate that the method reduces fine-tuned parameters and inference MACs by over 90%, significantly shrinks model size, preserves accuracy without degradation, and remains compatible with mainstream SFDA frameworks.

๐Ÿ“ Abstract
In this paper, we propose a framework for efficient Source-Free Domain Adaptation (SFDA) in the context of time-series, focusing on enhancing both parameter efficiency and data-sample utilization. Our approach introduces an improved paradigm for source-model preparation and target-side adaptation, aiming to enhance training efficiency during target adaptation. Specifically, we reparameterize the source model's weights in a Tucker-style decomposed manner, factorizing the model into a compact form during the source-model preparation phase. During target-side adaptation, only a subset of these decomposed factors is fine-tuned, leading to significant improvements in training efficiency. We demonstrate using PAC-Bayesian analysis that this selective fine-tuning strategy implicitly regularizes the adaptation process by constraining the model's learning capacity. Furthermore, this reparameterization reduces the overall model size and enhances inference efficiency, making the approach particularly well-suited for resource-constrained devices. Additionally, we demonstrate that our framework is compatible with various SFDA methods and achieves significant computational efficiency, reducing the number of fine-tuned parameters and inference overhead in terms of MACs by over 90% while maintaining model performance.
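The core idea in the abstract can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the paper's implementation: it treats a single 2D weight matrix and uses a truncated SVD as a stand-in for the Tucker-style factorization (for a matrix, Tucker-2 reduces to an SVD-like form `W ≈ U G Vᵀ`). The frozen factors `U`, `Vᵀ` play the role of the fixed subspace, and only the small core `G` would be fine-tuned on the target domain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical source-model weight: one dense layer, d_out x d_in.
d_out, d_in, r = 64, 32, 4          # rank r << min(d_out, d_in)
W = rng.standard_normal((d_out, d_in))

# Source-side preparation: factorize W ≈ U @ G @ Vt.
# (The paper applies Tucker decomposition to weight tensors; for a
# 2D matrix a truncated SVD gives the analogous compact form.)
U, s, Vt = np.linalg.svd(W, full_matrices=False)
U, Vt = U[:, :r], Vt[:r, :]         # frozen subspace factors
G = np.diag(s[:r])                  # small core: the only tunable part

# Target-side adaptation would update G only, so the trainable
# parameter count drops from d_out*d_in to r*r.
print(f"fine-tuned fraction: {G.size / W.size:.3%}")
```

With `r = 4` this leaves 16 of 2048 parameters trainable (under 1%), which is the kind of reduction the abstract's ">90% fewer fine-tuned parameters" claim refers to; the chosen rank trades reconstruction fidelity against adaptation capacity.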
Problem

Research questions and friction points this paper is trying to address.

Time Series
Unsupervised Domain Adaptation
Computational Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Efficient Source-Free Domain Adaptation
Time Series Analysis
Parameter Optimization