🤖 AI Summary
Existing objective image/video quality assessment (IQA/VQA) metrics exhibit significant performance degradation on Neural Radiance Fields (NeRF)-synthesized content, highlighting an urgent need for a NeRF-specific evaluation benchmark.
Method: We introduce the first subjective quality database for NeRF-synthesized videos, comprising 48 high-fidelity 360° scene videos rendered by seven state-of-the-art NeRF methods (e.g., Instant-NGP, Plenoxels), with ground-truth Mean Opinion Scores (MOS) obtained via rigorously controlled human subjective experiments.
Contribution/Results: This database establishes the first systematic, NeRF-tailored quality assessment benchmark, filling a critical gap in standardized subjective datasets for neural rendering. It enables rigorous validation and development of objective IQA/VQA metrics, and empirically reveals fundamental limitations of both full-reference and no-reference metrics on NeRF content—thereby underscoring the necessity of perceptually grounded, NeRF-aware quality models.
📝 Abstract
This short paper proposes a new database, NeRF-QA, containing 48 videos synthesized with seven NeRF-based methods, together with perceived quality scores obtained from subjective assessment tests; both real and synthetic 360-degree scenes were considered for video selection. This database will make it possible to evaluate how well existing objective quality metrics suit NeRF-based synthesized views, and will also support the development of new quality metrics.
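A database of this kind is typically used by correlating an objective metric's predictions with the ground-truth MOS, e.g. via Pearson (PLCC) and Spearman (SROCC) correlation. The sketch below illustrates that workflow with small pure-Python helpers; the MOS values and metric scores are hypothetical illustrative numbers, not data from NeRF-QA.

```python
# Minimal sketch of benchmarking an objective quality metric against
# subjective MOS, as a NeRF-QA-style database enables.
# All numeric data below is hypothetical, for illustration only.

def pearson(x, y):
    """Pearson linear correlation coefficient (PLCC)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(v):
    """Rank positions (1-based) of each value; assumes no ties."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order):
        r[i] = rank + 1
    return r

def spearman(x, y):
    """Spearman rank-order correlation coefficient (SROCC)."""
    return pearson(ranks(x), ranks(y))

# Hypothetical MOS from subjective tests for six synthesized videos.
mos = [4.2, 3.1, 2.5, 4.8, 1.9, 3.7]
# Hypothetical scores from some objective metric on the same videos.
metric_scores = [0.82, 0.64, 0.58, 0.91, 0.40, 0.60]

print(f"PLCC:  {pearson(metric_scores, mos):.3f}")
print(f"SROCC: {spearman(metric_scores, mos):.3f}")
```

A high SROCC indicates the metric preserves the perceptual ranking of the videos even if its scale differs from MOS; in practice, metric scores are often passed through a fitted logistic mapping before computing PLCC.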