🤖 AI Summary
Problem: Existing Approximate Message Passing (AMP) frameworks cannot simultaneously handle matrix-valued iterates and non-separable denoisers, leaving them unable to characterize the high-dimensional asymptotic behavior of multi-source transfer learning estimators under distribution shift.
Method: We propose the Generalized Long AMP (GLAMP) framework, the first AMP variant to simultaneously support matrix-valued iterates and non-separable denoisers, and rigorously establish its State Evolution (SE) theory.
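For orientation, the classical AMP recursion that GLAMP generalizes (shown here in its well-known Lasso form due to Donoho, Maleki, and Montanari) uses a vector iterate, an entrywise (separable) denoiser η, and a scalar SE parameter τ_t; the exact GLAMP recursion, with matrix iterates and non-separable denoisers, is the one given in the paper:

```latex
% Classical scalar-SE AMP for the Lasso, shown only as the baseline that
% GLAMP generalizes: \eta acts entrywise, \delta = n/p, and \langle\cdot\rangle
% is the empirical average over coordinates.
\begin{aligned}
x^{t+1} &= \eta\!\left(x^{t} + A^{\top} z^{t};\, \theta_t\right),\\
z^{t}   &= y - A x^{t} + \tfrac{1}{\delta}\, z^{t-1}
           \left\langle \eta'\!\left(x^{t-1} + A^{\top} z^{t-1};\, \theta_{t-1}\right)\right\rangle,\\
\tau_{t+1}^{2} &= \sigma^{2} + \tfrac{1}{\delta}\,
           \mathbb{E}\!\left[\left(\eta\!\left(B + \tau_t Z;\, \theta_t\right) - B\right)^{2}\right],
           \qquad Z \sim \mathcal{N}(0,1).
\end{aligned}
```

In GLAMP, the iterate x^t becomes a matrix (e.g., one column per data source), the denoiser may depend on all coordinates jointly, and the scalar τ_t is replaced by a matrix-valued SE state.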
Contribution/Results: GLAMP enables the first exact asymptotic risk characterization of three Lasso-based transfer learning estimators: the Stacked Lasso, the Model Averaging Estimator, and the Second Step Estimator. We rigorously prove the GLAMP state evolution and validate its finite-sample accuracy via extensive simulations. This work removes a fundamental modeling limitation of classical AMP, providing an analytically tractable and computationally feasible framework for high-dimensional statistical inference in multi-source transfer learning.
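To illustrate what an "exact asymptotic risk characterization" delivers in this literature (stated generically for the scalar Lasso case, not as the paper's theorem): once the state evolution reaches a fixed point τ_*, the limiting estimation risk reduces to an explicit low-dimensional expectation,

```latex
% Generic form of an AMP-style exact risk formula (Bayati--Montanari for
% the Lasso); GLAMP establishes analogues for the transfer estimators.
\lim_{p \to \infty} \frac{1}{p}\, \bigl\| \hat{\beta} - \beta \bigr\|_2^2
  \;=\; \mathbb{E}\!\left[ \left( \eta\!\left(B + \tau_* Z;\, \theta_*\right) - B \right)^{2} \right],
\qquad B \sim p_{\beta},\; Z \sim \mathcal{N}(0,1) \text{ independent.}
```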
📝 Abstract
Approximate Message Passing (AMP) algorithms enable precise characterization of certain classes of random objects in the high-dimensional limit, and have found widespread applications in fields such as statistics, deep learning, genetics, and communications. However, existing AMP frameworks cannot simultaneously handle matrix-valued iterates and non-separable denoising functions. This limitation prevents them from precisely characterizing estimators that draw information from multiple data sources with distribution shifts. In this work, we introduce Generalized Long Approximate Message Passing (GLAMP), a novel extension of AMP that addresses this limitation. We rigorously prove state evolution for GLAMP. GLAMP significantly broadens the scope of AMP, enabling the analysis of transfer learning estimators that were previously out of reach. We demonstrate the utility of GLAMP by precisely characterizing the risk of three Lasso-based transfer learning estimators: the Stacked Lasso, the Model Averaging Estimator, and the Second Step Estimator. We also demonstrate the remarkable finite-sample accuracy of our theory via extensive simulations.
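To fix ideas about the three estimators, below is a minimal, hypothetical Python sketch of one plausible reading of each: pooled fitting, per-source averaging with given weights, and a two-stage target correction. The function names, penalty choices, and the exact form of the second step are illustrative assumptions, not the paper's definitions; the estimators actually analyzed by GLAMP are those specified in the paper.

```python
# Illustrative only: plausible forms of the three Lasso-based transfer
# estimators, assuming K sources (X_k, y_k) plus a designated target set.
import numpy as np
from sklearn.linear_model import Lasso

def stacked_lasso(Xs, ys, lam):
    """Stacked Lasso: pool all sources and fit a single Lasso on the stack."""
    X, y = np.vstack(Xs), np.concatenate(ys)
    return Lasso(alpha=lam).fit(X, y).coef_

def model_averaging(Xs, ys, lam, weights):
    """Model Averaging Estimator: fit one Lasso per source, then take a
    weighted average of the per-source coefficient vectors."""
    betas = [Lasso(alpha=lam).fit(X, y).coef_ for X, y in zip(Xs, ys)]
    return np.average(betas, axis=0, weights=weights)

def second_step(Xs, ys, X_tgt, y_tgt, lam1, lam2):
    """Second Step Estimator (one common two-stage scheme): use the pooled
    fit as a pilot, then fit a target-only Lasso to the residual shift."""
    beta0 = stacked_lasso(Xs, ys, lam1)
    delta = Lasso(alpha=lam2).fit(X_tgt, y_tgt - X_tgt @ beta0).coef_
    return beta0 + delta
```

In the paper's high-dimensional regime, the GLAMP state evolution yields exact limiting risks for estimators of this type; the sketch above is meant only to convey their structure.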