🤖 AI Summary
Existing real-world image super-resolution (Real-ISR) methods suffer from reconstruction artifacts and structural distortions under complex or severe degradations, primarily due to diffusion models' limited perceptual and structural understanding of low-quality inputs. To address this, the authors propose PURE, an end-to-end autoregressive multimodal reconstruction framework, the first to adapt a pretrained autoregressive multimodal large model (Lumina-mGPT) to Real-ISR. PURE is instruction-tuned to perceive the degradation level, understand the image content by generating semantic descriptions, and restore the image by generating high-quality image tokens autoregressively; at inference, an image-token entropy-based Top-k sampling strategy improves local structural consistency. In experiments, PURE preserves content fidelity and texture realism, especially in complex multi-object scenes. The code and models will be released.
📝 Abstract
By leveraging the generative priors of pre-trained text-to-image diffusion models, significant progress has been made in real-world image super-resolution (Real-ISR). However, these methods tend to generate inaccurate and unnatural reconstructions in complex and/or heavily degraded scenes, primarily due to their limited capability to perceive and understand the input low-quality image. To address these limitations, we propose, for the first time to our knowledge, to adapt a pre-trained autoregressive multimodal model, such as Lumina-mGPT, into a robust Real-ISR model, namely PURE, which Perceives and Understands the input low-quality image, then REstores its high-quality counterpart. Specifically, we apply instruction tuning to Lumina-mGPT so that it perceives the image degradation level and the relationships between previously generated image tokens and the next token, understands the image content by generating image semantic descriptions, and consequently restores the image by generating high-quality image tokens autoregressively with the collected information. In addition, we reveal that the image token entropy reflects the image structure and present an entropy-based Top-k sampling strategy to optimize the local structure of the image during inference. Experimental results demonstrate that PURE preserves image content while generating realistic details, especially in complex scenes with multiple objects, showcasing the potential of autoregressive multimodal generative models for robust Real-ISR. The model and code will be available at https://github.com/nonwhy/PURE.
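The abstract does not spell out how the entropy-based Top-k rule works, so the sketch below is only an illustration of the general idea: compute the Shannon entropy of the next-token distribution and let it modulate `k`, so that confident (low-entropy, structured) positions sample from few candidates while uncertain (high-entropy, textured) positions sample from many. The function name, the linear interpolation rule, and the `k_min`/`k_max` bounds are all assumptions, not the paper's actual algorithm.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

def entropy_topk_sample(logits, k_min=1, k_max=50, rng=None):
    """Illustrative entropy-adaptive Top-k sampling (not the paper's exact rule).

    Low entropy -> small k (commit to the dominant token, preserving structure);
    high entropy -> large k (allow diversity, e.g. in textured regions).
    """
    if rng is None:
        rng = np.random.default_rng()
    p = softmax(np.asarray(logits, dtype=np.float64))
    h = -(p * np.log(p + 1e-12)).sum()        # Shannon entropy of the distribution
    h_max = np.log(len(p))                    # entropy of a uniform distribution
    # Hypothetical rule: interpolate k linearly with the normalized entropy.
    k = int(k_min + (k_max - k_min) * (h / h_max))
    k = max(1, min(k, len(p)))
    top = np.argsort(p)[-k:]                  # indices of the k most probable tokens
    q = p[top] / p[top].sum()                 # renormalize over the kept candidates
    return int(rng.choice(top, p=q))
```

With a sharply peaked distribution the entropy is near zero, so `k` collapses toward `k_min` and the sampler effectively picks the argmax token; a near-uniform distribution pushes `k` toward `k_max`.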