R3eVision: A Survey on Robust Rendering, Restoration, and Enhancement for 3D Low-Level Vision

📅 2025-06-19
🤖 AI Summary
Neural rendering methods (e.g., NeRF, 3D Gaussian Splatting) suffer severe performance degradation under real-world image degradations, including noise, blur, low resolution, and adverse weather. To address this, the survey introduces "Degradation-Aware 3D Low-Level Vision" (3D LLV), a paradigm for robust 3D reconstruction from degraded inputs. It formally defines the degradation-aware rendering problem and identifies two fundamental challenges: preserving spatio-temporal consistency and mitigating ill-posed optimization. The survey categorizes recent methods that integrate 2D low-level restoration and enhancement into neural rendering frameworks, reviews over 100 state-of-the-art works, and consolidates dedicated datasets and evaluation protocols. Together, these techniques enable high-fidelity, robust 3D reconstruction and perception under degradation, providing foundational support for autonomous driving, AR/VR, and robotics applications.

📝 Abstract
Neural rendering methods such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have achieved significant progress in photorealistic 3D scene reconstruction and novel view synthesis. However, most existing models assume clean and high-resolution (HR) multi-view inputs, which limits their robustness under real-world degradations such as noise, blur, low-resolution (LR), and weather-induced artifacts. To address these limitations, the emerging field of 3D Low-Level Vision (3D LLV) extends classical 2D Low-Level Vision tasks including super-resolution (SR), deblurring, weather degradation removal, restoration, and enhancement into the 3D spatial domain. This survey, referred to as R³eVision, provides a comprehensive overview of robust rendering, restoration, and enhancement for 3D LLV by formalizing the degradation-aware rendering problem and identifying key challenges related to spatio-temporal consistency and ill-posed optimization. Recent methods that integrate LLV into neural rendering frameworks are categorized to illustrate how they enable high-fidelity 3D reconstruction under adverse conditions. Application domains such as autonomous driving, AR/VR, and robotics are also discussed, where reliable 3D perception from degraded inputs is critical. By reviewing representative methods, datasets, and evaluation protocols, this work positions 3D LLV as a fundamental direction for robust 3D content generation and scene-level reconstruction in real-world environments.
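The degradation-aware rendering problem the abstract refers to can be sketched as an inverse problem. The notation below is illustrative, not taken from the paper: a renderer $\mathcal{R}$ with scene parameters $\theta$, per-view degradation operators $D_i$ (blur, downsampling, weather), camera poses $\pi_i$, and noise $n_i$:

```latex
y_i = D_i\!\big(\mathcal{R}(\theta;\,\pi_i)\big) + n_i, \qquad i = 1, \dots, N,
\qquad
\hat{\theta} = \arg\min_{\theta} \sum_{i=1}^{N} \big\lVert D_i\!\big(\mathcal{R}(\theta;\,\pi_i)\big) - y_i \big\rVert^2 + \lambda\,\Phi(\theta).
```

The ill-posedness mentioned in the abstract shows up here directly: many scene parameters $\theta$ can explain the same degraded observations $y_i$, so a prior $\Phi$ and cross-view consistency are needed to constrain the solution.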
Problem

Research questions and friction points this paper is trying to address.

Addressing 3D scene reconstruction under real-world degradations like noise and blur
Extending 2D low-level vision tasks to 3D spatial domain
Enhancing robust 3D content generation for real-world applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends 2D Low-Level Vision to 3D
Integrates LLV into neural rendering
Addresses real-world degradation challenges
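The points above can be illustrated with a deliberately tiny 1-D toy of degradation-aware fitting: a hypothetical "renderer" is just a parameter vector, the degradation is a box blur, and gradient descent fits the parameters so that the blurred rendering matches a degraded observation. All names and choices here are a sketch for intuition, not the paper's method.

```python
import numpy as np

def box_blur(x, k=5):
    """Simple box-blur degradation operator D (toy stand-in for real-world blur)."""
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

# Ground-truth "scene" signal and its degraded observation y = D(x) + noise.
rng = np.random.default_rng(0)
x_true = np.sin(np.linspace(0, 4 * np.pi, 128))
y_obs = box_blur(x_true) + 0.01 * rng.standard_normal(128)

# "Scene parameters" theta, fitted through the known degradation operator,
# i.e. minimize ||D(R(theta)) - y||^2 with R = identity in this toy.
theta = np.zeros(128)
lr = 0.5
for _ in range(500):
    residual = box_blur(theta) - y_obs   # D(R(theta)) - y
    grad = box_blur(residual)            # symmetric kernel: adjoint of D is D itself
    theta -= lr * grad                   # gradient step on the data term

final_loss = np.mean((box_blur(theta) - y_obs) ** 2)
```

The noise term makes the problem mildly ill-posed even in 1-D: frequencies the box blur suppresses cannot be recovered from the data term alone, which is the same reason full 3D LLV methods add priors and cross-view constraints.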