Equivariant Image Modeling

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Generative models face inherent optimization conflicts when decomposing high-dimensional data distributions into subtasks, making it challenging for existing approaches to simultaneously achieve efficiency, scalability, and generalization. This paper introduces the first task-aligned translation-equivariant image generation framework. It employs column-wise tokenization to enhance horizontal translational symmetry, incorporates windowed causal attention to preserve local contextual consistency, and establishes a joint training paradigm combining class-conditional diffusion and autoregressive modeling. Evaluated on 256×256 ImageNet, our method matches state-of-the-art autoregressive models in performance while reducing computational overhead. It further demonstrates significantly improved zero-shot generalization and enables synthesis of ultra-long images—addressing key limitations in both efficiency and expressivity of prior generative architectures.

📝 Abstract
Current generative models, such as autoregressive and diffusion approaches, decompose high-dimensional data distribution learning into a series of simpler subtasks. However, inherent conflicts arise during the joint optimization of these subtasks, and existing solutions fail to resolve such conflicts without sacrificing efficiency or scalability. We propose a novel equivariant image modeling framework that inherently aligns optimization targets across subtasks by leveraging the translation invariance of natural visual signals. Our method introduces (1) column-wise tokenization which enhances translational symmetry along the horizontal axis, and (2) windowed causal attention which enforces consistent contextual relationships across positions. Evaluated on class-conditioned ImageNet generation at 256×256 resolution, our approach achieves performance comparable to state-of-the-art AR models while using fewer computational resources. Systematic analysis demonstrates that enhanced equivariance reduces inter-task conflicts, significantly improving zero-shot generalization and enabling ultra-long image synthesis. This work establishes the first framework for task-aligned decomposition in generative modeling, offering insights into efficient parameter sharing and conflict-free optimization. The code and models are publicly available at https://github.com/drx-code/EquivariantModeling.
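The core idea of column-wise tokenization can be illustrated with a minimal sketch: an image is read as a left-to-right sequence of column tokens, so a horizontal translation of the image becomes a plain shift of the token sequence. This is only an illustration of the symmetry argument; the function name `column_tokenize` and the raw-pixel flattening are assumptions for demonstration, and the paper's actual tokenizer is a learned model (see the linked repository).

```python
import numpy as np

def column_tokenize(image, cols_per_token=1):
    """Split an (H, W, C) image into a left-to-right sequence of column tokens.

    Each token is a flattened slab of `cols_per_token` adjacent columns, so a
    horizontal translation of the image corresponds to a shift of the token
    sequence -- the symmetry that column-wise tokenization exposes.
    """
    H, W, C = image.shape
    assert W % cols_per_token == 0
    n_tokens = W // cols_per_token
    # (H, W, C) -> (n_tokens, H * cols_per_token * C)
    tokens = image.reshape(H, n_tokens, cols_per_token, C)
    tokens = tokens.transpose(1, 0, 2, 3).reshape(n_tokens, -1)
    return tokens

# Shifting the image by one column shifts the token sequence by one position.
img = np.arange(8).reshape(2, 4, 1)
toks = column_tokenize(img)
shifted_toks = column_tokenize(np.roll(img, 1, axis=1))
```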
Problem

Research questions and friction points this paper is trying to address.

Resolves conflicts in joint optimization of generative subtasks
Enhances translational symmetry in image modeling
Improves zero-shot generalization and long image synthesis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Equivariant image modeling framework
Column-wise tokenization enhances symmetry
Windowed causal attention ensures consistency
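The windowed causal attention in the bullets above can be sketched as an attention mask: each position attends only causally and only to its last few tokens, so every position sees an identically shaped local context. This is a generic sliding-window causal mask, not the paper's exact implementation; the function name and window size are illustrative assumptions.

```python
import numpy as np

def windowed_causal_mask(seq_len, window):
    """Boolean attention mask of shape (seq_len, seq_len).

    Position i may attend to positions j with i - window < j <= i, i.e.
    causal attention restricted to the last `window` tokens. Because the
    visible context has the same shape at every position, the per-position
    prediction task stays consistent under translation.
    """
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

mask = windowed_causal_mask(seq_len=5, window=3)
```

In practice such a mask would be passed to an attention layer (e.g. as an additive bias of `-inf` on masked-out entries) before the softmax.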