AI Summary
This work addresses the challenge of achieving high-fidelity, fine-grained text-guided image editing without model fine-tuning. To this end, we propose VARIN, a novel framework that pioneers the integration of noise inversion into Visual AutoRegressive (VAR) models. VARIN introduces a Location-aware Argmax Inversion (LAI) mechanism, enabling reversible modeling in the Gumbel noise space and facilitating precise, text-driven edits. Leveraging pseudo-inverse functions over discrete token sequences together with structural preservation constraints, VARIN performs accurate content replacement and attribute modification without updating any model parameters. Extensive experiments demonstrate that VARIN consistently preserves the original image's structure, background, and fine details while significantly improving editing accuracy and robustness across diverse text instructions. Quantitative and qualitative evaluations show that VARIN outperforms state-of-the-art zero-shot editing methods in generation quality and fidelity.
Abstract
Visual autoregressive (VAR) models have recently emerged as a promising class of generative models, achieving performance comparable to diffusion models on text-to-image generation tasks. While conditional generation has been widely explored, the ability to perform prompt-guided image editing without additional training is equally critical, as it underpins numerous practical applications. This paper investigates the text-guided image-editing capabilities of VAR by introducing Visual AutoRegressive Inverse Noise (VARIN), the first noise inversion-based editing technique designed explicitly for VAR models. VARIN leverages a novel pseudo-inverse function for argmax sampling, named Location-aware Argmax Inversion (LAI), to generate inverse Gumbel noises. These inverse noises enable exact reconstruction of the source image and facilitate targeted, controllable edits aligned with textual prompts. Extensive experiments demonstrate that VARIN effectively modifies source images according to the given prompts while largely preserving the original background and structural details, validating its efficacy as a practical editing approach.
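To make the "inverse Gumbel noise" idea concrete: under the Gumbel-max trick, a discrete token is drawn as argmax(logits + g) with g i.i.d. standard Gumbel, so inverting a sampled token means recovering noise g consistent with the observed argmax. The sketch below uses standard conditional Gumbel sampling (the truncated-Gumbel trick from the A*-sampling literature) as an illustrative stand-in; it is not the paper's Location-aware Argmax Inversion itself, and the function name and setup are ours.

```python
import numpy as np

def conditional_gumbel_noise(logits, token, rng):
    """Sample Gumbel noise g such that argmax(logits + g) == token.

    Illustrative argmax pseudo-inverse via conditional Gumbel sampling,
    NOT the exact LAI mechanism from the paper.
    """
    logits = np.asarray(logits, dtype=np.float64)
    Z = np.logaddexp.reduce(logits)  # log-sum-exp of the logits
    # The max of (logits + Gumbel noise) is itself Gumbel(Z); sample it.
    M = Z - np.log(-np.log(rng.uniform(1e-12, 1.0)))
    # Losing coordinates: Gumbel(logits_j) truncated to lie below M.
    Y = logits - np.log(-np.log(rng.uniform(1e-12, 1.0, logits.shape)))
    t = -np.log(np.exp(-Y) + np.exp(-M))
    t[token] = M  # the chosen token attains the maximum
    return t - logits  # noise g with logits + g == t

rng = np.random.default_rng(0)
logits = np.array([1.0, -2.0, 0.5, 3.0])
g = conditional_gumbel_noise(logits, token=2, rng=rng)
assert int(np.argmax(logits + g)) == 2  # replaying the noise reproduces the token
```

Replaying such per-position noises through the model reconstructs the source image exactly, while swapping the text condition at selected positions yields the targeted edit.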