🤖 AI Summary
Existing methods struggle to jointly model 3D geometry and view-dependent appearance effects, such as specular highlights and Fresnel reflections, within a unified framework. This work proposes a novel implicit 3D representation that, for the first time, encodes stochastically subsampled surface light fields into compact latent vectors, yielding a unified latent space that co-represents both geometry and view-dependent appearance. Conditioned on a single input image, a latent flow-matching model then generates 3D objects whose illumination and material properties are consistent with the input. The approach outperforms prior work in both visual realism and input fidelity, accurately reproducing complex view-dependent effects under challenging lighting conditions.
📝 Abstract
We propose a 3D latent representation that jointly models object geometry and view-dependent appearance. Most prior works focus on either reconstructing 3D geometry or predicting view-independent diffuse appearance, and thus struggle to capture realistic view-dependent effects. Our approach leverages the observation that RGB-depth images provide samples of a surface light field: each pixel records the radiance leaving a surface point toward the camera. By encoding random subsamples of this surface light field into a compact set of latent vectors, our model learns to represent both geometry and appearance within a unified 3D latent space. This representation reproduces view-dependent effects such as specular highlights and Fresnel reflections under complex lighting. We further train a latent flow matching model on this representation to learn its distribution conditioned on a single input image, enabling the generation of 3D objects whose appearance is consistent with the lighting and materials in the input. Experiments show that our approach achieves higher visual quality and better input fidelity than existing methods.
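To make the core idea concrete, the sketch below shows how an RGB-depth image can be turned into surface light field samples and then stochastically subsampled, as the abstract describes. This is a minimal illustration under assumed conventions (a pinhole camera with intrinsics `K`, points in the camera frame, a hypothetical `surface_light_field_samples` helper), not the paper's actual implementation:

```python
import numpy as np

def surface_light_field_samples(rgb, depth, K, cam_pos, n_samples, rng):
    """Back-project an RGB-D image into surface light field samples.

    Each sample is a (3D point, viewing direction, RGB) triple: the pixel's
    color is the radiance leaving that surface point toward the camera.
    Function and argument names are illustrative assumptions.
    """
    h, w = depth.shape
    # Pixel grid in homogeneous image coordinates [u, v, 1].
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
    # Back-project: X = depth * K^{-1} [u, v, 1]^T, in the camera frame.
    rays = pix @ np.linalg.inv(K).T
    points = rays * depth.reshape(-1, 1)
    # Viewing direction: unit vector from each surface point toward the camera.
    view_dirs = cam_pos[None, :] - points
    view_dirs /= np.linalg.norm(view_dirs, axis=1, keepdims=True)
    colors = rgb.reshape(-1, 3)
    # Stochastic subsampling: a random subset of (point, direction, color)
    # triples is what gets encoded into the compact latent vectors.
    idx = rng.choice(points.shape[0], size=n_samples, replace=False)
    return points[idx], view_dirs[idx], colors[idx]
```

A set encoder can then map these sampled triples to the latent vectors; because the subset is drawn randomly at each step, the encoder must explain all views of each surface point, which is what forces the latent space to capture view-dependent appearance rather than a single diffuse color.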