Arbitrary-Scale 3D Gaussian Super-Resolution

📅 2025-08-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 3D Gaussian splatting (3DGS)-based super-resolution methods are constrained by fixed integer scaling factors, compromising both computational efficiency and arbitrary-scale rendering quality; direct rendering introduces aliasing artifacts, while post-hoc upsampling degrades real-time performance. This paper proposes the first unified framework supporting arbitrary-scale (both integer and non-integer) super-resolution for 3D Gaussian point clouds. Built upon 3DGS, our end-to-end trainable approach integrates scale-aware rendering, generative prior-guided optimization, and a progressive super-resolution strategy. It ensures cross-scale structural consistency and enables high-fidelity real-time rendering with a single model: achieving 85 FPS at 1080p resolution, with a PSNR gain of +6.59 dB over baseline 3DGS. The method effectively suppresses jaggies and significantly enhances multi-scale visual fidelity.

📝 Abstract
Existing 3D Gaussian Splatting (3DGS) super-resolution methods typically perform high-resolution (HR) rendering of fixed scale factors, making them impractical for resource-limited scenarios. Directly rendering arbitrary-scale HR views with vanilla 3DGS introduces aliasing artifacts due to the lack of scale-aware rendering ability, while adding a post-processing upsampler for 3DGS complicates the framework and reduces rendering efficiency. To tackle these issues, we build an integrated framework that incorporates scale-aware rendering, generative prior-guided optimization, and progressive super-resolving to enable 3D Gaussian super-resolution of arbitrary scale factors with a single 3D model. Notably, our approach supports both integer and non-integer scale rendering to provide more flexibility. Extensive experiments demonstrate the effectiveness of our model in rendering high-quality arbitrary-scale HR views (6.59 dB PSNR gain over 3DGS) with a single model. It preserves structural consistency with LR views and across different scales, while maintaining real-time rendering speed (85 FPS at 1080p).
Problem

Research questions and friction points this paper is trying to address.

Enabling arbitrary-scale 3D Gaussian super-resolution rendering
Eliminating aliasing artifacts in vanilla 3DGS upscaling
Maintaining real-time performance while avoiding post-processing upsamplers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scale-aware rendering for arbitrary-scale super-resolution
Generative prior-guided optimization for quality enhancement
Progressive super-resolving with single 3D model
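The paper does not publish its rendering equations here, but the core idea behind scale-aware rendering can be sketched with a standard EWA-style anti-aliasing trick: when rendering at an arbitrary scale factor, the projected 2D Gaussian covariance grows with the scale, while the low-pass (pixel) filter stays one output pixel wide. The function below is a minimal illustration under that assumption, not the authors' actual implementation; `base_dilation = 0.3` is the variance commonly used in EWA-style splatting and is a hypothetical choice here.

```python
import numpy as np

def scale_aware_covariance(cov2d, scale, base_dilation=0.3):
    """Dilate a projected 2D Gaussian covariance for a target render scale.

    cov2d: (2, 2) screen-space covariance at the base (LR) resolution.
    scale: target upsampling factor (e.g. 1.5, 2, 3.7); may be non-integer.
    base_dilation: variance of the pixel-sized low-pass filter, in output
        pixels squared (hypothetical value, in the spirit of EWA splatting).
    """
    # The projected footprint scales quadratically with the render scale...
    cov_hr = (scale ** 2) * np.asarray(cov2d, dtype=np.float64)
    # ...while the anti-aliasing filter is always one output pixel wide,
    # so its variance is added after scaling. This keeps sub-pixel
    # Gaussians from degenerating into aliasing "jaggies" at any scale.
    return cov_hr + base_dilation * np.eye(2)
```

For example, an isotropic unit-variance Gaussian rendered at a non-integer scale of 1.5 gets covariance 1.5² + 0.3 = 2.55 per axis, so the same 3D model yields a consistently band-limited footprint at every scale factor.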
👥 Authors
Huimin Zeng — Department of ECE, College of Engineering, Northeastern University, Boston, USA
Yue Bai — Northwestern University; Northeastern University
Yun Fu — Department of ECE, College of Engineering, Northeastern University, Boston, USA; Khoury College of Computer Sciences, Northeastern University, Boston, USA