StyleNAT: Giving Each Head a New Perspective

πŸ“… 2022-11-10
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 23
✨ Influential: 0
πŸ€– AI Summary
To address the high computational cost and degraded structural coherence in Transformer-based visual generative modelsβ€”caused by quadratic attention complexity and limited receptive fieldsβ€”this paper introduces StyleNAT, a framework built around a head-partitioned Neighborhood Attention (NA) mechanism. By assigning different receptive fields to different attention heads, the design lets the model capture both fine-grained local detail and global semantic context. StyleNAT is also among the first to integrate neighborhood attention into a style-based (StyleGAN-like) image generation framework. Quantitatively, on FFHQ-256, StyleNAT achieves a state-of-the-art (SOTA) FID of 2.046β€”improving upon StyleGAN-XL by 6.4%β€”while reducing model parameters by 28% and increasing sampling throughput by 56%. On FFHQ-1024, it attains an FID of 4.174, establishing a new SOTA among Transformer-based generative models.
πŸ“ Abstract
Image generation has been a long sought-after but challenging task, and performing the generation task in an efficient manner is similarly difficult. Often researchers attempt to create a "one size fits all" generator, where there are few differences in the parameter space for drastically different datasets. Herein, we present a new transformer-based framework, dubbed StyleNAT, targeting high-quality image generation with superior efficiency and flexibility. At the core of our model is a carefully designed framework that partitions attention heads to capture local and global information, which is achieved through using Neighborhood Attention (NA). With different heads able to pay attention to varying receptive fields, the model is able to better combine this information, and adapt, in a highly flexible manner, to the data at hand. StyleNAT attains a new SOTA FID score on FFHQ-256 with 2.046, beating prior arts with convolutional models such as StyleGAN-XL and transformers such as HIT and StyleSwin, and a new transformer SOTA on FFHQ-1024 with an FID score of 4.174. These results show a 6.4% improvement on FFHQ-256 scores when compared to StyleGAN-XL with a 28% reduction in the number of parameters and 56% improvement in sampling throughput. Code and models will be open-sourced at https://github.com/SHI-Labs/StyleNAT.
Problem

Research questions and friction points this paper is trying to address.

Reduce computational burden in vision transformers
Maintain global and local coherence in attention
Improve efficiency and quality in image generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Partitioned attention heads with varying receptive fields
Integration of Neighborhood Attention into StyleGAN
Improved efficiency with fewer parameters and higher throughput
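The core innovation above, splitting attention heads so that each attends over a different-sized neighborhood, can be sketched in a toy form. The snippet below is a minimal 1D NumPy illustration, not the paper's implementation: StyleNAT uses 2D Neighborhood Attention (with dilation) via the NATTEN library inside a StyleGAN-style generator, and all function names here are hypothetical.

```python
import numpy as np

def neighborhood_attention_1d(q, k, v, kernel):
    """Each query attends only to keys within a local window of
    `kernel` positions centered on it (clamped at the borders)."""
    n, d = q.shape
    half = kernel // 2
    out = np.zeros_like(v)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        scores = q[i] @ k[lo:hi].T / np.sqrt(d)
        w = np.exp(scores - scores.max())   # softmax over the neighborhood
        w /= w.sum()
        out[i] = w @ v[lo:hi]
    return out

def partitioned_heads(x, kernels):
    """Split channels evenly across heads; each head runs neighborhood
    attention with its own kernel size, so small kernels capture local
    detail while large ones capture longer-range context. Outputs are
    concatenated back along the channel axis."""
    n, d = x.shape
    hd = d // len(kernels)
    heads = []
    for h, kern in enumerate(kernels):
        xh = x[:, h * hd:(h + 1) * hd]
        heads.append(neighborhood_attention_1d(xh, xh, xh, kern))
    return np.concatenate(heads, axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))
y = partitioned_heads(x, kernels=[3, 7])  # one local head, one wider head
```

Because each query only scores keys inside its window, the cost grows linearly with sequence length and kernel size rather than quadratically with sequence length, which is the efficiency argument behind NA.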