AI Summary
This work addresses the open problem of establishing global convergence for the multiplicative update iteration \( v^{(k+1)} = \mathrm{diag}((D_{v^{(k)}}^{1/2} M D_{v^{(k)}}^{1/2})^{1/2}) \), which arises in private machine learning from a regularized nuclear norm optimization with Hadamard-product structure. Combining tools from matrix analysis and fixed-point theory, the paper gives the first rigorous proof that this iteration converges monotonically to the unique global optimum, closing a theoretical gap. The work also uses Gemini 3 to assist with the mathematical derivations, and distills an effective human-AI collaborative proving strategy that offers a practical paradigm for AI-assisted mathematical research.
Abstract
We analyze a fixed-point iteration $v \leftarrow \phi(v)$ arising in the optimization of a regularized nuclear norm objective involving the Hadamard product structure, posed in~\cite{denisov} in the context of an optimization problem over the space of algorithms in private machine learning. We prove that the iteration $v^{(k+1)} = \text{diag}((D_{v^{(k)}}^{1/2} M D_{v^{(k)}}^{1/2})^{1/2})$ converges monotonically to the unique global optimizer of the potential function $J(v) = 2 \text{Tr}((D_v^{1/2} M D_v^{1/2})^{1/2}) - \sum_i v_i$, closing a problem left open there.
The bulk of this proof was provided by Gemini 3, subject to some corrections and interventions, and Gemini 3 also sketched the initial version of this note. The note is therefore as much a commentary on the practical use of AI in mathematics as it is the closure of a small gap in the literature. Accordingly, we include a short narrative description of the prompting process and some resulting principles for working with AI to prove mathematics.
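To make the iteration concrete, the following is a minimal NumPy sketch (our own illustration, not code from the note; the test matrix, starting point, and function names are hypothetical choices) that runs the update $v^{(k+1)} = \text{diag}((D_{v^{(k)}}^{1/2} M D_{v^{(k)}}^{1/2})^{1/2})$ on a small positive definite $M$ and prints the potential $J(v)$ along the way.

```python
import numpy as np

def psd_sqrt(A):
    """Symmetric square root of a positive semidefinite matrix via eigendecomposition."""
    w, Q = np.linalg.eigh(A)
    return Q @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ Q.T

def inner_sqrt(v, M):
    """Compute (D_v^{1/2} M D_v^{1/2})^{1/2} for a nonnegative vector v."""
    s = np.sqrt(v)  # diagonal of D_v^{1/2}
    return psd_sqrt(s[:, None] * M * s[None, :])

def step(v, M):
    """One fixed-point update: v <- diag((D_v^{1/2} M D_v^{1/2})^{1/2})."""
    return np.diag(inner_sqrt(v, M)).copy()

def J(v, M):
    """Potential J(v) = 2 Tr((D_v^{1/2} M D_v^{1/2})^{1/2}) - sum_i v_i."""
    return 2.0 * np.trace(inner_sqrt(v, M)) - v.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5
    A = rng.standard_normal((n, n))
    M = A @ A.T + np.eye(n)   # a small, strictly positive definite test matrix (hypothetical example)
    v = np.ones(n)            # positive starting point
    for k in range(30):
        print(f"k={k:2d}  J(v) = {J(v, M):.10f}")
        v = step(v, M)
```

On such random examples the printed values of $J$ should be nondecreasing and the iterates should stabilize, consistent with the monotone convergence established in the note; the sketch is only a numerical check, not part of the proof.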