🤖 AI Summary
Real-time novel view synthesis of fuzzy geometric structures (e.g., hair) suffers from a fundamental trade-off between modeling fidelity and rendering efficiency.
Method: We propose a hierarchical translucent mesh representation: a stack of signed distance function (SDF) shells whose spacing is learned during training, enabling a small, bounded number of samples per ray, rasterization-accelerated spatial indexing, and sorting-free translucent rendering in a fixed order. The shells are then baked into meshes via differentiable mesh baking and fitted with UV textures, jointly modeling both surface appearance and volumetric fuzziness.
Contribution/Results: Our method runs at over 30 FPS in real time on commodity laptops and smartphones, significantly outperforming conventional volume rendering and point-based approaches in speed. It establishes a new Pareto-optimal balance between reconstruction quality and computational efficiency and, crucially, enables the first real-time, surface-level rendering of fuzzy objects on low-power devices.
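The bounded-sampling property claimed above can be illustrated with a toy stand-in for the learned SDF shells. This is a minimal sketch, not the paper's method: it assumes spherical shells (level sets of a sphere SDF at offsets `d_i`), so each ray-shell intersection is analytic; `shell_samples` and `offsets` are hypothetical names. The point is that a ray crosses each of the K nested shells at most twice, so the number of sample locations per ray is bounded by 2K, independent of scene content.

```python
import numpy as np

def shell_samples(origin, direction, radius, offsets):
    """Per-ray sample locations on K nested spherical shells (illustrative).

    Each shell i is the level set f(x) = d_i of the sphere SDF
    f(x) = |x| - radius, i.e. a sphere of radius (radius + d_i).
    A ray hits each shell at most twice, bounding samples by 2K.
    """
    d = direction / np.linalg.norm(direction)
    ts = []
    for off in offsets:              # offsets d_i stand in for learned spacings
        r = radius + off
        b = np.dot(origin, d)
        disc = b * b - (np.dot(origin, origin) - r * r)
        if disc >= 0.0:              # ray intersects this shell
            sq = np.sqrt(disc)
            ts.extend(t for t in (-b - sq, -b + sq) if t > 0.0)
    return sorted(ts)                # at most 2 * len(offsets) entries
```

In the actual method the shells are arbitrary baked meshes, and these intersection points are found by rasterizing the shells rather than by analytic ray tests.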
📝 Abstract
High-quality view synthesis relies on volume rendering, splatting, or surface rendering. While surface rendering is typically the fastest, it struggles to accurately model fuzzy geometry like hair. In turn, alpha-blending techniques excel at representing fuzzy materials but require an unbounded number of samples per ray (P1). Further overheads are induced by empty space skipping in volume rendering (P2) and sorting input primitives in splatting (P3). We present a novel representation for real-time view synthesis where the (P1) number of sampling locations is small and bounded, (P2) sampling locations are efficiently found via rasterization, and (P3) rendering is sorting-free. We achieve this by representing objects as semi-transparent multi-layer meshes rendered in a fixed order. First, we model surface layers as signed distance function (SDF) shells with optimal spacing learned during training. Then, we bake them as meshes and fit UV textures. Unlike single-surface methods, our multi-layer representation effectively models fuzzy objects. In contrast to volume and splatting-based methods, our approach enables real-time rendering on low-power laptops and smartphones.
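The sorting-free rendering described above (P3) follows from the nesting of the shells: along any ray, the layers are always encountered in the same order, so per-frame primitive sorting is unnecessary. A minimal sketch of such fixed-order "over" compositing, with hypothetical array shapes (`composite_layers` is an illustrative name, not from the paper):

```python
import numpy as np

def composite_layers(rgb, alpha):
    """Fixed-order back-to-front 'over' compositing of K translucent layers.

    rgb:   (K, H, W, 3) per-layer colors, ordered innermost (back) first
    alpha: (K, H, W, 1) per-layer opacities in [0, 1]
    Because layer order is fixed by the shell index, no sorting is needed.
    """
    out = np.zeros(rgb.shape[1:])
    for c, a in zip(rgb, alpha):
        out = c * a + out * (1.0 - a)  # standard "over" operator
    return out
```

On the GPU the same fixed order lets each layer be rasterized and blended in a predetermined pass, which is what removes the sorting overhead of splatting-based methods.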