AI Summary
In high-dimensional transfer learning, the assumption of global similarity between source and target domains is often invalid when only partial information is shared, leading to suboptimal performance. To address this, we propose a robust partial-information transfer learning method. Our approach jointly models source and target parameters via a novel conditional spike-and-slab heavy-tailed prior, enabling fine-grained characterization of their joint distribution for the first time. We unify variable selection and transfer learning within a single-step Bayesian variational inference framework. Theoretically, we establish covariate-specific transfer gains, prove variable selection consistency, and derive finite-sample bounds on estimation and prediction errors. Extensive experiments on multiple benchmark and real-world datasets demonstrate that our method significantly outperforms state-of-the-art transfer learning approaches, while maintaining scalability and practical applicability.
Abstract
The popularity of transfer learning stems from its ability to borrow information from useful auxiliary datasets. Existing statistical transfer learning methods usually adopt a global similarity measure between the source data and the target data, which may lead to inefficiency when only partial information is shared. In this paper, we propose a novel Bayesian transfer learning method named "CONCERT" that allows robust partial information transfer for high-dimensional data analysis. A conditional spike-and-slab prior is introduced in the joint distribution of target and source parameters for information transfer. By incorporating covariate-specific priors, we can characterize partial similarities and integrate source information collaboratively to improve performance on the target. In contrast to existing work, CONCERT is a one-step procedure that achieves variable selection and information transfer simultaneously. We establish variable selection consistency, as well as estimation and prediction error bounds, for CONCERT. Our theory demonstrates the covariate-specific benefit of transfer learning. To keep our algorithm scalable, we adopt the variational Bayes framework to facilitate implementation. Extensive experiments and two real data applications showcase the validity and advantage of CONCERT over existing cutting-edge transfer learning methods.
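To make the spike-and-slab idea concrete, the sketch below is a minimal, hypothetical illustration (not the paper's implementation): each coefficient is drawn from a heavy-tailed "slab" with probability `pi` (an active covariate) and from a narrow "spike" near zero otherwise; applying the same mixture to the source-minus-target difference then captures partial similarity, since covariates whose difference falls in the spike are effectively shared. The function name and the parameters `pi`, `slab_scale`, and `spike_scale` are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_spike_and_slab(p, pi=0.2, slab_scale=1.0, spike_scale=1e-3):
    """Hypothetical sketch: draw p coefficients from a spike-and-slab mixture.

    With probability pi a coefficient comes from a heavy-tailed Laplace
    'slab'; otherwise from a narrow Gaussian 'spike' concentrated at zero.
    """
    active = rng.random(p) < pi                   # per-covariate slab indicators
    slab = rng.laplace(0.0, slab_scale, size=p)   # heavy-tailed slab draws
    spike = rng.normal(0.0, spike_scale, size=p)  # near-zero spike draws
    return np.where(active, slab, spike), active

# Target coefficients, then a source whose coefficients either share the
# target's value (difference in the spike) or deviate (difference in the
# slab) -- a toy version of covariate-specific partial similarity.
beta_target, active = sample_spike_and_slab(1000)
delta, deviates = sample_spike_and_slab(1000, pi=0.1)
beta_source = beta_target + delta
```

Under this toy generative model, roughly 90% of source coefficients agree with the target up to spike-level noise, which is the kind of partial overlap a global similarity measure would miss.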