🤖 AI Summary
This work addresses the hyperparameter selection challenge in transfer learning for high-dimensional sparse regression, focusing on controlling the strength of information transfer in Lasso-based methods such as Trans-Lasso. Methodologically, we conduct the first sharp asymptotic analysis of such transfer learning via the replica method. Our theoretical analysis reveals an intrinsic simplicity in transfer behavior: omitting one of the two types of transferred information incurs negligible degradation in generalization performance, effectively reducing the critical hyperparameter space from two dimensions to one. This insight substantially simplifies hyperparameter tuning. Empirical evaluation on real-world IMDb data and semi-artificial data derived from MNIST demonstrates that our strategy achieves near-optimal predictive performance while significantly reducing hyperparameter search overhead. The results provide both interpretable theoretical guidance and a practical, deployable framework for high-dimensional transfer learning.
📝 Abstract
Transfer learning techniques aim to leverage information from multiple related datasets to enhance prediction quality on a target dataset. Such methods have been adopted in the context of high-dimensional sparse regression, and several Lasso-based algorithms have been developed, with Trans-Lasso and Pretraining Lasso as notable examples. These algorithms require the statistician to select hyperparameters that control the extent and type of information transfer from the related datasets. However, selection strategies for these hyperparameters, as well as the impact of these choices on the algorithm's performance, have been largely unexplored. To address this, we conduct a thorough, precise study of the algorithm in a high-dimensional setting via an asymptotic analysis using the replica method. Our approach reveals a surprisingly simple behavior of the algorithm: ignoring one of the two types of information transferred to the fine-tuning stage has little effect on generalization performance, implying that the effort spent on hyperparameter selection can be significantly reduced. Our theoretical findings are also empirically supported by experiments on real-world and semi-artificial datasets, derived from the IMDb and MNIST datasets, respectively.
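To make the two-stage transfer scheme concrete, here is a minimal sketch in the spirit of Trans-Lasso: first fit a Lasso on the (larger) source dataset, then fit a sparse correction on the target residuals and add it back. This is an illustrative simplification, not the paper's exact algorithm; the synthetic data, variable names, and regularization values (`alpha=0.05`) are assumptions chosen for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
p = 50

# Source and target share most of a sparse signal; the target differs slightly.
beta_src = np.zeros(p)
beta_src[:5] = 1.0
beta_tgt = beta_src.copy()
beta_tgt[0] += 0.3

X_src = rng.standard_normal((500, p))
y_src = X_src @ beta_src + 0.1 * rng.standard_normal(500)
X_tgt = rng.standard_normal((60, p))
y_tgt = X_tgt @ beta_tgt + 0.1 * rng.standard_normal(60)

# Stage 1: rough sparse estimate from the abundant source data.
w = Lasso(alpha=0.05).fit(X_src, y_src).coef_

# Stage 2 (fine-tuning): learn a sparse correction on the scarce target
# data, regressing the residual y - X @ w, then add it back.
delta = Lasso(alpha=0.05).fit(X_tgt, y_tgt - X_tgt @ w).coef_
beta_hat = w + delta
```

Here the two regularization strengths act as the transfer-controlling hyperparameters: the stage-2 penalty governs how strongly the final estimate is pulled toward the transferred source estimate.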