A Real-world Display Inverse Rendering Dataset

📅 2025-08-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current inverse rendering research is hindered by the absence of large-scale, real-world, ground-truth-annotated datasets for display-camera systems. To address this, we introduce the first real-scene inverse rendering dataset, built upon an LCD display and a dual-polarization camera setup, enabling image synthesis and noise simulation under arbitrary display patterns. Leveraging programmable LCD illumination and co-calibrated stereo polarimetric imaging, our pipeline achieves robust separation of diffuse and specular reflectance, supports one-light-at-a-time (OLAT) capture, and provides high-fidelity ground truth for geometry and reflectance properties. This dataset bridges a critical gap in real-world data for geometry and material reconstruction in display-camera systems. Furthermore, we propose an effective baseline method grounded in this dataset, which surpasses state-of-the-art performance on both photometric stereo and inverse rendering benchmarks.
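The diffuse-specular separation mentioned above exploits the fact that an LCD emits linearly polarized light: specular reflection largely preserves that polarization, while diffuse reflection depolarizes it. A minimal sketch of this idea (the function name, and the assumption of ideal polarizers and perfectly polarization-preserving specular reflection, are ours, not the paper's):

```python
import numpy as np

def separate_diffuse_specular(img_parallel, img_cross):
    """Split reflectance using a camera-side analyzer aligned with vs.
    crossed against the LCD's polarization axis.

    img_parallel: image with the analyzer parallel to the display's polarization
    img_cross:    image with the analyzer crossed (rotated 90 degrees)

    Diffuse reflection is depolarized, so half of it passes either analyzer;
    specular reflection keeps the display's polarization, so (ideally) it
    appears only in the parallel image.
    """
    diffuse = 2.0 * img_cross  # cross image sees half the diffuse light, no specular
    specular = np.clip(img_parallel - img_cross, 0.0, None)
    return diffuse, specular
```

In practice the separation is only approximate (specular reflection partially depolarizes at steep angles), which is one reason a co-calibrated stereo polarimetric setup and ground truth are valuable.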

📝 Abstract
Inverse rendering aims to reconstruct geometry and reflectance from captured images. Display-camera imaging systems offer unique advantages for this task: each pixel can easily function as a programmable point light source, and the polarized light emitted by LCD displays facilitates diffuse-specular separation. Despite these benefits, there is currently no public real-world dataset captured using display-camera systems, unlike other setups such as light stages. This absence hinders the development and evaluation of display-based inverse rendering methods. In this paper, we introduce the first real-world dataset for display-based inverse rendering. To achieve this, we construct and calibrate an imaging system comprising an LCD display and stereo polarization cameras. We then capture a set of objects with diverse geometry and reflectance under one-light-at-a-time (OLAT) display patterns. We also provide high-quality ground-truth geometry. Our dataset enables the synthesis of captured images under arbitrary display patterns and different noise levels. Using this dataset, we evaluate the performance of existing photometric stereo and inverse rendering methods, and provide a simple yet effective baseline for display inverse rendering that outperforms state-of-the-art inverse rendering methods. Code and dataset are available on our project page at https://michaelcsj.github.io/DIR/
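The "synthesis under arbitrary display patterns" the abstract describes follows from the linearity of light transport: an image captured under any display pattern is a weighted sum of the OLAT images, weighted by the pattern's per-pixel brightness. A hedged sketch (array shapes and the function name are illustrative assumptions, not the dataset's actual API):

```python
import numpy as np

def relight(olat_stack, pattern):
    """Synthesize an image under an arbitrary display pattern.

    olat_stack: (K, H, W) array, one captured image per display
                pixel/patch lit alone (the OLAT captures)
    pattern:    (K,) brightness of each display pixel/patch in the
                target pattern, in the same linear radiometric units

    Light transport is linear in the illumination, so the result is
    simply the pattern-weighted sum of the OLAT images.
    """
    return np.tensordot(pattern, olat_stack, axes=1)  # -> (H, W)
```

This assumes linear (radiometrically calibrated) images; gamma-encoded captures would have to be linearized first.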
Problem

Research questions and friction points this paper is trying to address.

Lack of real-world dataset for display-camera inverse rendering systems
Need for calibrated display-camera setup to capture geometry and reflectance
Absence of benchmark for evaluating display-based inverse rendering methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Calibrated imaging system combining an LCD display with stereo polarization cameras
Capture under one-light-at-a-time (OLAT) display patterns
Synthesis of images under arbitrary display patterns
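The summary also mentions noise simulation at different levels for synthesized images. One common way to model this (our assumption; the paper's actual noise model may differ) is a heteroscedastic Gaussian approximating shot plus read noise, with variance proportional to the signal:

```python
import numpy as np

def simulate_noise(clean, gain=0.01, read_std=0.002, rng=None):
    """Add signal-dependent sensor noise to a clean synthesized image.

    Approximates Poisson shot noise plus Gaussian read noise with a
    heteroscedastic Gaussian: variance = gain * signal + read_std**2.
    The gain and read_std values here are illustrative, not calibrated
    parameters of any particular camera.
    """
    rng = np.random.default_rng() if rng is None else rng
    var = gain * np.clip(clean, 0.0, None) + read_std ** 2
    noisy = clean + rng.normal(0.0, np.sqrt(var))
    return np.clip(noisy, 0.0, 1.0)
```

Scaling `gain` and `read_std` up or down would emulate the "different noise levels" the dataset supports.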