🤖 AI Summary
This work introduces a cross-sample collaborative generation paradigm, the first to enable joint denoising of multiple images during diffusion model inference, departing from the conventional assumption of independent sampling. Methodologically, it reformulates the self-attention mechanism of standard Diffusion Transformers to support patch-level inter-image interaction and proposes a group-wise joint sampling strategy. Key contributions include: (1) an interpretable metric for cross-sample attention strength; (2) a positive scaling law between group size and generation quality; (3) up to a 32.2% reduction in FID on ImageNet-256×256, significantly outperforming independent-sampling baselines; and (4) empirical evidence of a strong negative correlation between cross-sample attention strength and FID, revealing the mechanistic basis of effective collaborative denoising.
📝 Abstract
In this work, we explore an untapped signal in diffusion model inference. While all previous methods generate images independently at inference, we instead ask whether samples can be generated collaboratively. We propose Group Diffusion, unlocking the attention mechanism to be shared across images rather than limited to the patches within a single image. This enables images to be jointly denoised at inference time, capturing both intra- and inter-image correspondence. We observe a clear scaling effect: larger group sizes yield stronger cross-sample attention and better generation quality. Furthermore, we introduce a quantitative measure to capture this behavior and show that its strength closely correlates with FID. Built on standard diffusion transformers, our GroupDiff achieves up to a 32.2% FID improvement on ImageNet-256×256. Our work reveals cross-sample inference as an effective, previously unexplored mechanism for generative modeling.
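The core mechanism can be illustrated concretely: instead of running self-attention within each image, all patch tokens in a group are flattened into one sequence so attention can span images. Below is a minimal single-head NumPy sketch of this idea, together with one plausible way to measure cross-sample attention strength (the fraction of attention mass a token places on other images' patches). The function name, shapes, and the exact metric definition are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def group_joint_attention(x, Wq, Wk, Wv):
    """Single-head joint self-attention over a group of images.

    x: array of shape (G, N, D) — G images, N patch tokens each, D channels.
    All G*N tokens are flattened into one sequence, so a patch can attend
    to patches of *other* images in the group, not just its own.
    """
    G, N, D = x.shape
    tokens = x.reshape(G * N, D)                  # flatten group into one sequence
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])       # scaled dot-product scores
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over all G*N keys
    out = (attn @ v).reshape(G, N, D)

    # Illustrative cross-sample attention strength: average attention mass
    # each token places on patches belonging to a different image.
    same_image = np.kron(np.eye(G), np.ones((N, N))).astype(bool)
    cross_strength = attn[~same_image].sum() / (G * N)
    return out, cross_strength
```

With group size G = 1 the cross-sample term vanishes and this reduces to ordinary per-image self-attention, which is why larger groups can only add interaction on top of the standard mechanism.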