AI Summary
This work addresses the underperformance of existing polarization-based shape-from-polarization methods compared to RGB-only vision foundation models, a gap often attributed to domain discrepancies that cast doubt on the utility of polarization cues. To bridge this gap, the authors construct a high-quality dataset of polarimetric renderings of real objects and integrate DINOv3 pretraining priors, polarization-aware data augmentation, and a lightweight normal estimation network. With only 40,000 training scenes, their approach significantly outperforms both state-of-the-art polarization-based methods and RGB-only models in single-shot shape recovery, while reducing the required training data by 33× and model parameters by 8×. These results reaffirm the potential of polarization as a complementary modality and demonstrate that strong performance can be achieved with substantially less data and model complexity.
Abstract
We show that, with polarization cues, a lightweight model trained on a small dataset can outperform RGB-only vision foundation models (VFMs) in single-shot object-level surface normal estimation. Shape from polarization (SfP) has long been studied due to the strong physical relationship between polarization and surface geometry. Meanwhile, driven by scaling laws, RGB-only VFMs trained on large datasets have recently achieved impressive performance and surpassed existing SfP methods. This situation raises questions about the necessity of polarization cues, which require specialized hardware and have limited training data. We argue that the weaker performance of prior SfP methods does not come from the polarization modality itself, but from domain gaps. These domain gaps mainly arise from two sources. First, existing synthetic datasets use limited and unrealistic 3D objects, with simple geometry and random texture maps that do not match the underlying shapes. Second, real-world polarization signals are often affected by sensor noise, which is not well modeled during training. To address the first issue, we render a high-quality polarization dataset using 1,954 3D-scanned real-world objects. We further incorporate pretrained DINOv3 priors to improve generalization to unseen objects. To address the second issue, we introduce polarization sensor-aware data augmentation that better reflects real-world conditions. With only 40K training scenes, our method significantly outperforms both state-of-the-art SfP approaches and RGB-only VFMs. Extensive experiments show that polarization cues enable a 33× reduction in training data or an 8× reduction in model parameters, while still achieving better performance than RGB-only counterparts.
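The "strong physical relationship between polarization and surface geometry" that SfP exploits starts from the linear Stokes parameters, which a division-of-focal-plane polarization camera recovers from intensities measured behind polarizers at 0°, 45°, 90°, and 135°. The abstract does not spell these formulas out; the sketch below shows the standard recovery of the degree of linear polarization (DoLP, which constrains the surface zenith angle) and the angle of linear polarization (AoLP, which constrains the azimuth). The function name and the `eps` stabilizer are our own choices, not from the paper.

```python
import numpy as np

def stokes_from_polarizer_stack(i0, i45, i90, i135, eps=1e-8):
    """Linear Stokes parameters from intensities behind polarizers at
    0, 45, 90, and 135 degrees (scalars or same-shape arrays)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)          # total intensity
    s1 = i0 - i90                               # horizontal vs. vertical excess
    s2 = i45 - i135                             # +45 vs. -45 diagonal excess
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)  # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)             # angle of linear polarization (rad)
    return s0, dolp, aolp

# Sanity check on synthetic partially polarized light:
# I(theta) = 0.5 * I_t * (1 + rho * cos(2 * (theta - phi)))
it, rho, phi = 2.0, 0.3, 0.4
intensity = lambda t: 0.5 * it * (1 + rho * np.cos(2 * (t - phi)))
i0, i45, i90, i135 = (intensity(np.deg2rad(a)) for a in (0, 45, 90, 135))
s0, dolp, aolp = stokes_from_polarizer_stack(i0, i45, i90, i135)
# Recovers it, rho, and phi (up to the usual pi ambiguity in AoLP).
```

In SfP, DoLP and AoLP are then related to the surface normal through the Fresnel equations, which is why these quantities carry shape information that an RGB image alone does not.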
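The abstract motivates sensor-aware augmentation by noting that real polarization signals are corrupted by sensor noise that synthetic training data does not model. The exact augmentation used in the paper is not described here; the sketch below is a hypothetical illustration using a generic Poisson-Gaussian sensor model (shot noise plus read noise), applied jointly to all four polarizer channels so they remain consistent with a single exposure. All parameter names and values are illustrative assumptions, not the paper's.

```python
import numpy as np

def augment_polarizer_stack(stack, full_well=10000.0, read_sigma=0.01, rng=None):
    """Hypothetical noise augmentation for a (4, H, W) stack of polarizer-channel
    intensities in [0, 1]: Poisson shot noise at an assumed full-well capacity
    plus Gaussian read noise, shared across channels as in one real exposure."""
    rng = np.random.default_rng() if rng is None else rng
    photons = np.clip(stack, 0.0, 1.0) * full_well           # expected photon counts
    noisy = rng.poisson(photons) / full_well                 # shot noise (signal-dependent)
    noisy += rng.normal(0.0, read_sigma, size=stack.shape)   # read noise (signal-independent)
    return np.clip(noisy, 0.0, 1.0).astype(stack.dtype)

# Usage: corrupt a clean rendered stack before computing DoLP/AoLP targets,
# so the network sees polarization cues at realistic signal-to-noise ratios.
stack = np.full((4, 64, 64), 0.5)
noisy = augment_polarizer_stack(stack, rng=np.random.default_rng(0))
```

Because DoLP is a ratio of small Stokes differences to the total intensity, it is especially sensitive to this kind of noise, which is one plausible reason noise-free synthetic training transfers poorly to real captures.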