🤖 AI Summary
This paper addresses the label-free gradient inversion problem in the single-round averaged gradient (SAG) setting, aiming to reconstruct multiple original images with high fidelity from their batch-averaged gradients. To tackle the severe entanglement of gradient signals within a batch, we propose three core innovations: (1) a momentum-enhanced adaptive gradient correction mechanism that mitigates noise accumulation; (2) a momentum-based hybrid optimization strategy that integrates random-subset probing with the full-batch loss; and (3) a closed-form combinatorial rescaling that explicitly models gradient scaling relationships to tighten the convergence bound. Our approach enables joint multi-image reconstruction and significantly outperforms existing SAG inversion methods, especially in large-batch settings, achieving superior reconstruction quality and robustness. Crucially, it incurs computational overhead comparable to standard optimizers, requires no labels, auxiliary models, or external priors, and operates solely on gradient information.
📝 Abstract
We study gradient inversion in the challenging single-round averaged gradient (SAG) regime, where per-sample cues are entangled within a single batch-mean gradient. We introduce MAGIA, a momentum-based adaptive-correction gradient inversion attack: a novel label-inference-free framework that senses latent per-image signals by probing random data subsets. MAGIA's objective integrates two core innovations: (1) a closed-form combinatorial rescaling that yields a provably tighter optimization bound, and (2) a momentum-based mixing of whole-batch and subset losses that ensures reconstruction robustness. Extensive experiments demonstrate that MAGIA significantly outperforms advanced methods, achieving high-fidelity multi-image reconstruction in large-batch scenarios where prior works fail. This is all accomplished with a computational footprint comparable to standard solvers and without requiring any auxiliary information.
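The hybrid objective described above can be illustrated on a toy problem. The sketch below, assuming a simple linear model with known weights and labels (the paper's method is label-free; labels are assumed here purely to keep the example short), mixes a full-batch gradient-matching term with a random-subset probing term, where the mixing weight is smoothed by momentum. The mixing heuristic, learning rates, and all variable names are illustrative assumptions, not the paper's actual objective or rescaling constants.

```python
import numpy as np

rng = np.random.default_rng(0)

d, B = 4, 4                       # feature dim, batch size (toy scale)
w = rng.normal(size=d)            # model weights, known to the attacker
X_true = rng.normal(size=(B, d))  # private batch to be reconstructed
y = X_true @ w + 0.1              # labels (assumed known; a simplification)

def batch_grad(X):
    # averaged gradient of the per-sample squared loss wrt w
    r = X @ w - y
    return (2.0 / B) * (r[:, None] * X).sum(axis=0)

g_target = batch_grad(X_true)     # the observed batch-averaged gradient

def hybrid_terms(X, subset):
    # full-batch matching distance
    full = np.sum((batch_grad(X) - g_target) ** 2)
    # subset probing: subset-mean dummy gradient vs. the full target;
    # in expectation over subsets, the subset mean equals the batch mean
    r = X[subset] @ w - y[subset]
    g_sub = (2.0 / len(subset)) * (r[:, None] * X[subset]).sum(axis=0)
    sub = np.sum((g_sub - g_target) ** 2)
    return full, sub

X_hat = rng.normal(size=(B, d))   # dummy batch to optimize
lam, mom, lr, eps = 0.5, 0.9, 0.005, 1e-5

losses = []
for step in range(600):
    subset = rng.choice(B, size=B // 2, replace=False)
    full, sub = hybrid_terms(X_hat, subset)
    loss = lam * full + (1.0 - lam) * sub
    losses.append(full)
    # finite-difference gradient descent on the dummy batch
    grad = np.zeros_like(X_hat)
    for i in range(B):
        for j in range(d):
            Xp = X_hat.copy()
            Xp[i, j] += eps
            fp, sp = hybrid_terms(Xp, subset)
            grad[i, j] = (lam * fp + (1.0 - lam) * sp - loss) / eps
    X_hat -= lr * grad
    # momentum-smoothed mixing weight (a schematic heuristic, not the
    # paper's closed-form rescaling)
    lam = mom * lam + (1.0 - mom) * (sub / (full + sub + 1e-12))

print(f"gradient-matching loss: {losses[0]:.4f} -> {losses[-1]:.6f}")
```

Note that driving the matching loss to zero does not by itself guarantee recovery of `X_true` (the inverse problem is generally non-identifiable at this scale); the sketch only shows the shape of the mixed objective, not the paper's convergence guarantees.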