🤖 AI Summary
Neural rendering methods (e.g., NeRF, 3D Gaussian Splatting) suffer severe performance degradation under real-world image corruptions, including noise, blur, low resolution, and adverse weather. This survey, R³eVision, organizes the emerging field of "3D Low-Level Vision" (3D LLV), which extends classical 2D restoration and enhancement tasks into the 3D domain to enable robust reconstruction from degraded inputs. It formalizes the degradation-aware rendering problem and identifies two fundamental challenges: preserving spatio-temporal consistency and mitigating ill-posed optimization. Recent methods that unify neural rendering with 2D low-level restoration are categorized, and over 100 state-of-the-art works are reviewed alongside dedicated datasets and evaluation protocols. By enabling high-fidelity, robust 3D reconstruction and perception under degradation, 3D LLV provides foundational support for autonomous driving, AR/VR, and robotics applications.
📝 Abstract
Neural rendering methods such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have achieved significant progress in photorealistic 3D scene reconstruction and novel view synthesis. However, most existing models assume clean and high-resolution (HR) multi-view inputs, which limits their robustness under real-world degradations such as noise, blur, low resolution (LR), and weather-induced artifacts. To address these limitations, the emerging field of 3D Low-Level Vision (3D LLV) extends classical 2D low-level vision tasks, including super-resolution (SR), deblurring, weather degradation removal, restoration, and enhancement, into the 3D spatial domain. This survey, referred to as R³eVision, provides a comprehensive overview of robust rendering, restoration, and enhancement for 3D LLV by formalizing the degradation-aware rendering problem and identifying key challenges related to spatio-temporal consistency and ill-posed optimization. Recent methods that integrate LLV into neural rendering frameworks are categorized to illustrate how they enable high-fidelity 3D reconstruction under adverse conditions. Application domains such as autonomous driving, AR/VR, and robotics are also discussed, where reliable 3D perception from degraded inputs is critical. By reviewing representative methods, datasets, and evaluation protocols, this work positions 3D LLV as a fundamental direction for robust 3D content generation and scene-level reconstruction in real-world environments.
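To make the degradation-aware rendering problem concrete: rather than forcing a clean rendering to match corrupted observations, one common formulation fits the rendering *pushed through a degradation operator* to the degraded inputs. The sketch below is illustrative only and not the survey's exact formulation; the function names and the choice of a simple blur as the degradation operator are assumptions for the example.

```python
import numpy as np

def degrade(img, kernel):
    """Toy degradation operator: horizontal blur applied row by row.
    Real systems model noise, defocus, low resolution, or weather instead."""
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, img
    )

def degradation_aware_loss(rendered_clean, observed_degraded, kernel):
    """Compare the *degraded* rendering against the degraded observation,
    so the underlying 3D representation can stay clean and consistent."""
    return float(np.mean((degrade(rendered_clean, kernel) - observed_degraded) ** 2))

def naive_loss(rendered_clean, observed_degraded):
    """The standard (degradation-unaware) photometric loss, which pushes
    the 3D representation to reproduce the corruption itself."""
    return float(np.mean((rendered_clean - observed_degraded) ** 2))
```

Under this view, the ill-posedness the survey highlights comes from jointly recovering the clean scene and the (often unknown, per-view) degradation operator, and spatio-temporal consistency comes from sharing one 3D representation across all degraded views.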