Foveated Diffusion: Efficient Spatially Adaptive Image and Video Generation

📅 2026-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of efficiently generating high-resolution images and videos while maintaining perceptual quality, a task hindered by computational cost that grows quadratically with token count, which in turn scales with resolution. To this end, the authors propose a foveation-inspired, perception-driven approach that leverages known or estimable gaze points. By employing gaze-guided non-uniform token allocation and mixed-resolution token construction, combined with a post-training strategy, the method enables existing base models to produce spatially consistent, high-fidelity outputs efficiently. A key component is a foveal resolution mask, which substantially reduces both token count and generation time. A user study demonstrates that the resulting outputs are perceptually indistinguishable from those generated at full resolution.
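The gaze-guided token allocation in the summary can be sketched as an eccentricity-dependent resolution mask: each patch is assigned a coarseness level based on its distance from the gaze point, with full resolution inside the fovea and progressively coarser tokens toward the periphery. All names and parameters below (`fovea_radius`, `falloff`, the level-per-distance rule) are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def foveation_mask(h_patches, w_patches, gaze_yx, fovea_radius=4.0, falloff=8.0):
    """Assign a coarseness level to each patch based on distance from gaze.

    Level 0 = full resolution (foveal region); each higher level halves
    the token resolution. Parameters are hypothetical placeholders.
    """
    ys, xs = np.mgrid[0:h_patches, 0:w_patches]
    # eccentricity of each patch center, measured in patch units
    dist = np.hypot(ys - gaze_yx[0], xs - gaze_yx[1])
    # full resolution inside the fovea, one coarser level per `falloff` patches
    levels = np.clip((dist - fovea_radius) / falloff, 0, None)
    return np.floor(levels).astype(int)

# 32x32 patch grid, gaze at the center
mask = foveation_mask(32, 32, gaze_yx=(16, 16))
# a level-k region needs only 4**-k as many tokens as full resolution
token_fraction = np.mean(4.0 ** -mask)
```

Under this toy allocation, most of the grid sits at coarser levels, so `token_fraction` is well below 1, which is the source of the claimed reduction in token count and generation time.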

📝 Abstract
Diffusion and flow matching models have unlocked unprecedented capabilities for creative content creation, such as interactive image and streaming video generation. The growing demand for higher resolutions, frame rates, and context lengths, however, makes efficient generation increasingly challenging, as computational complexity grows quadratically with the number of generated tokens. Our work seeks to optimize the efficiency of the generation process in settings where the user's gaze location is known or can be estimated, for example, by using eye tracking. In these settings, we leverage the eccentricity-dependent acuity of human vision: while a user perceives very high-resolution visual information in a small region around their gaze location (the foveal region), the ability to resolve detail quickly degrades in the periphery of the visual field. Our approach starts with a mask modeling the foveated resolution to allocate tokens non-uniformly, assigning higher token density to foveal regions and lower density to peripheral regions. An image or video is generated in a mixed-resolution token setting, yielding results perceptually indistinguishable from full-resolution generation, while drastically reducing the token count and generation time. To this end, we develop a principled mechanism for constructing mixed-resolution tokens directly from high-resolution data, allowing a foveated diffusion model to be post-trained from an existing base model while maintaining content consistency across resolutions. We validate our approach through extensive analysis and a carefully designed user study, demonstrating the efficacy of foveation as a practical and scalable axis for efficient generation.
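The abstract's "mixed-resolution tokens constructed directly from high-resolution data" could be realized, for example, by average-pooling peripheral regions according to a coarseness mask: neighboring patches that share a coarse level are merged into a single token, while foveal patches stay at full resolution. The merging and pooling rules below are a hypothetical sketch (assuming levels are constant within each merged block), not the paper's exact construction.

```python
import numpy as np

def mixed_resolution_tokens(image, levels, patch=8):
    """Build a mixed-resolution token list from a high-resolution image.

    `levels` gives a coarseness level per patch cell (0 = full resolution).
    A level-k cell is merged with its 2**k x 2**k neighborhood into one
    average-pooled, patch-sized token. Illustrative sketch only.
    """
    visited = np.zeros(levels.shape, dtype=bool)
    tokens, positions = [], []
    h_p, w_p = levels.shape
    for i in range(h_p):
        for j in range(w_p):
            if visited[i, j]:
                continue
            s = 2 ** int(levels[i, j])           # block side length, in patches
            i0, j0 = (i // s) * s, (j // s) * s  # snap to a level-aligned grid
            if visited[i0, j0]:
                continue
            visited[i0:i0 + s, j0:j0 + s] = True
            block = image[i0 * patch:(i0 + s) * patch,
                          j0 * patch:(j0 + s) * patch]
            # pool the (s*patch) x (s*patch) block to one patch x patch token
            tokens.append(block.reshape(patch, s, patch, s).mean(axis=(1, 3)))
            positions.append((i0, j0, s))
    return np.stack(tokens), positions

# 64x64 image, 8x8 grid of 8-pixel patches; right half one level coarser
img = np.arange(64 * 64, dtype=float).reshape(64, 64)
levels = np.zeros((8, 8), dtype=int)
levels[:, 4:] = 1
tokens, positions = mixed_resolution_tokens(img, levels, patch=8)
```

In this toy example the left half keeps all 32 full-resolution tokens while the right half collapses from 32 tokens to 8, so the full image is represented by 40 rather than 64 tokens; because every token is built by pooling the same high-resolution pixels, content stays consistent across resolution levels, which is what makes post-training from an existing base model plausible.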
Problem

Research questions and friction points this paper addresses.

foveated generation
efficient diffusion
spatially adaptive
token efficiency
human vision modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Foveated Diffusion
Spatially Adaptive Generation
Mixed-Resolution Tokens
Post-Training
Human Visual Eccentricity