🤖 AI Summary
Image-based 3D reconstruction of glossy objects remains challenging because diffuse and specular reflectance components are entangled in RGB images, and because lighting and material properties are strongly interdependent.
Method: This paper proposes a low-cost, easily deployable polarimetric neural implicit reconstruction framework. It acquires multi-view polarization images using a standard RGB camera equipped with a linear polarizer—requiring no precise calibration or angular measurement. For the first time, it integrates polarization physics—including Stokes vector representation, polarimetric BRDF, and surface polarization state—into a neural implicit field, enabling end-to-end joint optimization.
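The core physical principle behind the Stokes-vector rendering mentioned above can be made concrete with a small sketch. For an ideal linear polarizer at angle θ, the observed intensity follows from applying the polarizer's Mueller matrix to the incoming Stokes vector (s₀, s₁, s₂): I = ½(s₀ + s₁cos 2θ + s₂sin 2θ). The function name and setup below are illustrative, not from the paper:

```python
import numpy as np

def polarizer_intensity(stokes, theta):
    """Intensity seen through an ideal linear polarizer at angle theta.

    `stokes` holds the first three Stokes components (s0, s1, s2);
    the circular component s3 does not affect a linear polarizer's
    intensity measurement, so it is omitted here.
    """
    s0, s1, s2 = stokes
    return 0.5 * (s0 + s1 * np.cos(2 * theta) + s2 * np.sin(2 * theta))

# Unpolarized light (s1 = s2 = 0): half the intensity passes at any angle.
print(polarizer_intensity((1.0, 0.0, 0.0), 0.3))   # 0.5
# Fully horizontally polarized light: all passes at theta = 0.
print(polarizer_intensity((1.0, 1.0, 0.0), 0.0))   # 1.0
```

Because intensity varies sinusoidally with 2θ, images taken at different (unknown) polarizer angles carry independent constraints on the surface's polarization state, which is what makes the joint optimization in the paper possible.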
Contribution/Results: By incorporating polarization-aware rendering constraints and differentiable rendering losses, the method significantly improves 3D reconstruction accuracy and novel-view synthesis quality for specular objects on both public benchmarks and real-world scenes, achieving state-of-the-art performance while maintaining practicality and scalability.
📝 Abstract
The challenge of image-based 3D reconstruction for glossy objects lies in separating the diffuse and specular components of glossy surfaces from captured images, a task complicated by the ambiguity of discerning lighting conditions and material properties from RGB data alone. While state-of-the-art methods rely on tailored and/or high-end equipment for data acquisition, which can be cumbersome and time-consuming, this work introduces a scalable polarization-aided approach that employs cost-effective acquisition tools. By attaching a linear polarizer to a readily available RGB camera, multi-view polarization images can be captured without advance calibration or precise measurement of the polarizer angle, substantially reducing system construction costs. The proposed approach represents the polarimetric BRDF, Stokes vectors, and polarization states of object surfaces as neural implicit fields. These fields, together with the polarizer angle, are retrieved by optimizing the rendering loss against the input polarized images. By leveraging fundamental physical principles in the implicit representation of polarization rendering, our method demonstrates superiority over existing techniques in both reconstruction and novel-view synthesis, through experiments on public datasets and real captured images.
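To illustrate how an unknown polarizer angle can be recovered purely from a rendering loss, here is a minimal sketch of the idea on synthetic data. It assumes known per-point Stokes components and an ideal linear polarizer (intensity ½(s₀ + s₁cos 2θ + s₂sin 2θ)), and fits θ by gradient descent on the squared rendering error; all names and settings are hypothetical, and the paper's actual optimization jointly recovers the neural fields as well:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: 100 surface points with known Stokes components.
S = rng.uniform(-0.3, 0.3, size=(100, 3))
S[:, 0] = rng.uniform(0.5, 1.0, size=100)  # s0 (total intensity) > 0

def render(S, theta):
    """Ideal linear-polarizer intensity for each point at angle theta."""
    return 0.5 * (S[:, 0] + S[:, 1] * np.cos(2 * theta)
                          + S[:, 2] * np.sin(2 * theta))

true_theta = 0.7
I_obs = render(S, true_theta)  # "captured" polarized intensities

# Gradient descent on the mean squared rendering loss w.r.t. theta.
theta, lr = 0.0, 2.0
for _ in range(1000):
    resid = render(S, theta) - I_obs
    # d(render)/d(theta) = -s1*sin(2*theta) + s2*cos(2*theta)
    grad = np.mean(resid * (-S[:, 1] * np.sin(2 * theta)
                            + S[:, 2] * np.cos(2 * theta)))
    theta -= lr * grad

print(theta)  # converges toward 0.7
```

The loss is periodic in θ with period π (a polarizer at θ and θ + π are identical), so in practice the recovered angle is only defined modulo π; the paper's end-to-end optimization faces the same ambiguity.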