🤖 AI Summary
This work addresses the challenges of view inconsistency and lighting sensitivity in zero-shot super-resolution of physically based rendering (PBR) textures. The proposed method requires no fine-tuning or additional training data; instead, it leverages a pre-trained natural-image super-resolution model (e.g., Real-ESRGAN) as a structural prior and integrates it into a differentiable rendering-based iterative back-projection optimization framework. Crucially, it introduces two novel constraints: multi-view 2D consistency to enforce geometric coherence across viewpoints, and PBR-material-space identity regularization to explicitly encode physical rendering invariance during optimization. Experiments demonstrate that the method significantly outperforms both direct application of SR models and conventional texture optimization approaches on both manually designed and AI-generated meshes. It enables high-fidelity relighting and achieves state-of-the-art performance in both quantitative PBR metrics (e.g., BRDF fidelity) and qualitative rendering quality.
📝 Abstract
We present PBR-SR, a novel method for physically based rendering (PBR) texture super-resolution (SR). It outputs high-resolution, high-quality PBR textures from low-resolution (LR) PBR input in a zero-shot manner. PBR-SR leverages an off-the-shelf super-resolution model trained on natural images and iteratively minimizes the deviations between super-resolution priors and differentiable renderings. These enhancements are then back-projected into the PBR map space in a differentiable manner to produce refined, high-resolution textures. To mitigate view inconsistencies and lighting sensitivity, which are common in view-based super-resolution, our method applies 2D prior constraints across multi-view renderings, iteratively refining the shared, upscaled textures. In parallel, we incorporate identity constraints directly in the PBR texture domain to ensure the upscaled textures remain faithful to the LR input. PBR-SR operates without any additional training or data requirements, relying entirely on pretrained image priors. We demonstrate that our approach produces high-fidelity PBR textures for both artist-designed and AI-generated meshes, outperforming both direct application of SR models and prior texture optimization methods. Our results show high-quality outputs in both PBR and rendering evaluations, supporting advanced applications such as relighting.
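The optimization described above can be sketched in a few lines. The following is a minimal, illustrative toy, not the authors' implementation: `toy_render` stands in for a differentiable renderer (e.g., nvdiffrast), `sr_targets` stands in for SR-enhanced renderings from a model such as Real-ESRGAN, and `downsample` is a simple average-pooling stand-in for the identity (LR-consistency) operator. All function names and hyperparameters here are assumptions for exposition.

```python
import torch

def downsample(tex, factor=2):
    """Average-pool stand-in for the LR-consistency (identity) operator."""
    return torch.nn.functional.avg_pool2d(tex, factor)

def toy_render(tex, view_weight):
    """Placeholder differentiable 'renderer': per-view linear shading.
    A real pipeline would rasterize the mesh with the PBR maps under a view."""
    return view_weight * tex

def optimize_hr_texture(lr_tex, sr_targets, view_weights, steps=300, lam=0.5):
    """Iteratively refine a shared HR texture against multi-view SR priors,
    with an identity constraint back in the (LR) texture domain."""
    # Initialize the shared HR texture by naive upsampling of the LR input.
    hr_tex = torch.nn.functional.interpolate(
        lr_tex, scale_factor=2, mode="bilinear", align_corners=False)
    hr_tex = hr_tex.clone().requires_grad_(True)
    opt = torch.optim.Adam([hr_tex], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        # Multi-view 2D prior term: renderings should match SR-enhanced views.
        loss = sum(((toy_render(hr_tex, w) - t) ** 2).mean()
                   for w, t in zip(view_weights, sr_targets))
        # Identity term in texture space: downsampled HR stays faithful to LR.
        loss = loss + lam * ((downsample(hr_tex) - lr_tex) ** 2).mean()
        loss.backward()  # gradients flow through the (differentiable) renderer
        opt.step()
    return hr_tex.detach()
```

Because both loss terms are differentiable in the texture, the view-space enhancements are effectively back-projected into the PBR map through the optimizer, while the identity term keeps the result anchored to the LR input.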