GLAMP: An Approximate Message Passing Framework for Transfer Learning with Applications to Lasso-based Estimators

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing Approximate Message Passing (AMP) frameworks are constrained to scalar iterates and separable denoisers, rendering them inadequate for characterizing the high-dimensional asymptotic behavior of multi-source transfer learning estimators under distribution shift. Method: We propose the Generalized Long AMP (GLAMP) framework, the first AMP variant supporting matrix-valued iterates and non-separable denoisers, and rigorously establish its State Evolution (SE) theory. Contribution/Results: GLAMP yields the first exact asymptotic risk characterization of three Lasso-based transfer learning estimators: the Stacked Lasso, the Model Averaging Estimator, and the Second Step Estimator. We prove convergence of the GLAMP state evolution and validate its finite-sample accuracy via extensive simulations. This work removes fundamental modeling limitations of classical AMP, providing an analytically tractable and computationally feasible framework for high-dimensional statistical inference in multi-source transfer learning.

📝 Abstract
Approximate Message Passing (AMP) algorithms enable precise characterization of certain classes of random objects in the high-dimensional limit, and have found widespread applications in fields such as statistics, deep learning, genetics, and communications. However, existing AMP frameworks cannot simultaneously handle matrix-valued iterates and non-separable denoising functions. This limitation prevents them from precisely characterizing estimators that draw information from multiple data sources with distribution shifts. In this work, we introduce Generalized Long Approximate Message Passing (GLAMP), a novel extension of AMP that addresses this limitation. We rigorously prove state evolution for GLAMP. GLAMP significantly broadens the scope of AMP, enabling the analysis of transfer learning estimators that were previously out of reach. We demonstrate the utility of GLAMP by precisely characterizing the risk of three Lasso-based transfer learning estimators: the Stacked Lasso, the Model Averaging Estimator, and the Second Step Estimator. We also demonstrate the remarkable finite sample accuracy of our theory via extensive simulations.
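To make the abstract's setting concrete, here is a minimal sketch of the classical scalar-iterate AMP recursion for the Lasso (soft-thresholding denoiser plus Onsager correction). This is the baseline that GLAMP generalizes, not the paper's algorithm; the fixed threshold, dimensions, and function names below are illustrative choices.

```python
import numpy as np

def soft_threshold(x, t):
    """Componentwise soft-thresholding denoiser eta(x; t)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp_lasso(X, y, lam, iters=50):
    """Classical AMP for the Lasso with a fixed threshold lam (illustrative)."""
    n, p = X.shape
    beta = np.zeros(p)
    z = y.copy()  # corrected residual
    for _ in range(iters):
        # pseudo-observation: current estimate plus back-projected residual
        r = beta + X.T @ z
        beta = soft_threshold(r, lam)
        # Onsager correction: fraction of active coordinates per row
        b = np.count_nonzero(beta) / n
        z = y - X @ beta + b * z
    return beta
```

The Onsager term `b * z` is what distinguishes AMP from plain iterative soft-thresholding; it is exactly this correction that makes the iterates asymptotically Gaussian and the state evolution exact in the high-dimensional limit.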
Problem

Research questions and friction points this paper is trying to address.

Existing AMP frameworks cannot simultaneously handle matrix-valued iterates and non-separable denoisers
Estimators that pool multiple data sources under distribution shift lack a precise high-dimensional analysis
The asymptotic risk of Lasso-based transfer learning estimators was previously uncharacterized
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends AMP to handle matrix-valued iterates
Supports non-separable denoising functions
Enables precise risk characterization for transfer learning
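The "precise risk characterization" claimed above rests on a state-evolution recursion: a low-dimensional fixed-point equation that tracks the effective noise level of the AMP iterates. The sketch below runs this recursion by Monte Carlo for the classical separable (soft-thresholding) case with an illustrative two-point sparse prior; GLAMP's state evolution is matrix-valued and non-separable, so this is only the scalar baseline, and every parameter here is an assumption for illustration.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def state_evolution(delta=2.0, sigma2=0.1, theta=0.5, eps=0.1, mu=1.0,
                    iters=30, mc=200_000, seed=0):
    """Scalar state evolution tau_{t+1}^2 = sigma^2 + E[(eta(B + tau_t Z; theta) - B)^2] / delta.

    B ~ eps * point mass at mu + (1 - eps) * point mass at 0 (illustrative prior),
    Z ~ N(0, 1); delta = n/p is the sampling ratio.
    """
    rng = np.random.default_rng(seed)
    B = np.where(rng.random(mc) < eps, mu, 0.0)  # draws from the sparse prior
    Z = rng.standard_normal(mc)
    tau2 = sigma2 + np.mean(B ** 2) / delta      # initialization
    for _ in range(iters):
        mse = np.mean((soft(B + np.sqrt(tau2) * Z, theta) - B) ** 2)
        tau2 = sigma2 + mse / delta
    return tau2
```

The fixed point of this recursion gives the asymptotic per-coordinate risk of the corresponding AMP estimator; the paper's contribution is proving an analogous (but matrix-valued, non-separable) limit for the transfer learning estimators.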
Longlin Wang
Dept. of Statistics, Harvard University
Yanke Song
PhD Student, Department of Statistics, Harvard University
statistics, machine learning
Kuanhao Jiang
Dept. of Statistics, Harvard University
Pragya Sur
Dept. of Statistics, Harvard University