🤖 AI Summary
In large-scale distributed computing, each iteration may access only part of the data, which can be modeled as passing the data through a linear operator; this partial access causes information loss across iterations and degrades estimation accuracy. To address this, the paper proposes an approximate message passing (AMP) framework that integrates an autoregressive memory mechanism with orthogonal projection. The framework provides a general AMP model for data passed through a sequence of linear operators; a memory-augmented iterative update scheme together with a dedicated algorithm, projection AMP, for the case where each operator is an orthogonal projection, compatible with non-separable denoisers; and a state-evolution analysis, based on Gaussian process limit theory, yielding asymptotically exact statistical characterizations in the high-dimensional limit. Applied to recovering a rank-one spike corrupted by additive Gaussian noise under row-wise partial data updates, the theory's predictions align closely with numerical experiments, validating both the analytical tractability and the robustness of the iterative process.
📝 Abstract
This paper introduces a framework for approximate message passing (AMP) in dynamic settings where the data at each iteration is passed through a linear operator. This framework is motivated in part by applications in large-scale, distributed computing where only a subset of the data is available at each iteration. An autoregressive memory term is used to mitigate information loss across iterations and a specialized algorithm, called projection AMP, is designed for the case where each linear operator is an orthogonal projection. Precise theoretical guarantees are provided for a class of Gaussian matrices and non-separable denoising functions. Specifically, it is shown that the iterates can be well-approximated in the high-dimensional limit by a Gaussian process whose second-order statistics are defined recursively via state evolution. These results are applied to the problem of estimating a rank-one spike corrupted by additive Gaussian noise using partial row updates, and the theory is validated by numerical simulations.
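The abstract does not reproduce the update equations, so the following is only a minimal illustrative sketch of the classical AMP baseline for the rank-one spiked model mentioned above: a symmetric Gaussian spiked matrix, a `tanh` denoiser, and the standard Onsager correction. The paper's projection AMP additionally passes the data through row-subsampling orthogonal projections at each iteration and adds an autoregressive memory term; neither is reproduced here, and all variable names and parameter values (`lam`, `T`, the warm-start scale) are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, T = 2000, 2.0, 15  # dimension, spike strength, iterations (illustrative)

# Rank-one spike in symmetric Gaussian noise: Y = (lam/n) v v^T + W / sqrt(n)
v = rng.choice([-1.0, 1.0], size=n)            # +/-1 spike to be recovered
W = rng.normal(size=(n, n))
W = (W + W.T) / np.sqrt(2.0)                   # symmetrize the noise
Y = (lam / n) * np.outer(v, v) + W / np.sqrt(n)

# Classical AMP iteration x^{t+1} = Y f(x^t) - b_t f(x^{t-1}), where the
# Onsager coefficient b_t is the empirical average of f'(x^t).
x = 0.3 * v + rng.normal(size=n)               # weakly informative warm start
f_prev = np.zeros(n)
for _ in range(T):
    f = np.tanh(x)                             # denoiser matched to a +/-1 prior
    b = np.mean(1.0 - f**2)                    # average derivative of tanh
    x = Y @ f - b * f_prev
    f_prev = f

# Normalized overlap between the denoised iterate and the true spike
overlap = abs(np.dot(np.tanh(x), v)) / n
print(f"overlap with spike: {overlap:.3f}")
```

Above the weak-recovery threshold (here `lam = 2`), the overlap converges to a nontrivial fixed point predicted by state evolution; the paper's contribution is, in part, extending this style of exact asymptotic prediction to iterations that see only a projected subset of the data.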