🤖 AI Summary
This paper addresses phase retrieval of rank-one positive semidefinite matrices from rank-one measurements. To overcome key limitations of the Bures–Wasserstein gradient descent method, including nonsmoothness, numerical instability, and a gap between theoretical convergence rates and empirical performance, we propose a smoothed Newton framework. Specifically, we construct a differentiable approximation of the objective function via matrix smoothing and design a Newton-type iteration tailored to the Bures–Wasserstein geometry. We establish, for the first time, rigorous global superlinear convergence guarantees for this class of algorithms. Extensive experiments on synthetic data demonstrate that the proposed method achieves stable, rapid, and robust matrix recovery, significantly outperforming existing nonsmooth optimization approaches in both accuracy and efficiency.
📝 Abstract
We consider the phase retrieval problem, which involves recovering a rank-one positive semidefinite matrix from rank-one measurements. A recently proposed algorithm based on Bures–Wasserstein gradient descent (BWGD) exhibits superlinear convergence, but it is unstable, and existing theory proves only local linear convergence for higher-rank matrix recovery. We resolve this gap by revealing that BWGD implements Newton's method on a nonsmooth and nonconvex objective. We develop a smoothing framework that regularizes the objective, yielding a stable method with rigorous superlinear convergence guarantees. Experiments on synthetic data demonstrate the method's superior stability while maintaining fast convergence.
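To make the measurement model concrete, here is a minimal sketch of rank-one measurements of a rank-one PSD matrix. The dimensions, variable names, and random data are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 40  # signal dimension and number of measurements (assumed sizes)

x = rng.standard_normal(n)       # ground-truth signal; the target matrix is X = x x^T (rank-one, PSD)
A = rng.standard_normal((m, n))  # rows a_i are the sensing vectors

# Rank-one measurements: y_i = <a_i a_i^T, x x^T> = (a_i^T x)^2.
# Phase retrieval asks to recover x (equivalently X) from (A, y).
y = (A @ x) ** 2

# The phase (global sign) is lost: x and -x produce identical measurements,
# which is why recovery is posed over the matrix X = x x^T.
assert np.allclose((A @ (-x)) ** 2, y)
```

Since each `y_i` depends on `x` only through `x x^T`, the problem is naturally formulated over rank-one PSD matrices, the setting of the abstract.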