VENI: Variational Encoder for Natural Illumination

📅 2026-01-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing inverse rendering methods, which neglect the spherical structure and rotational equivariance of natural illumination and produce latent spaces with poor geometric organization. We propose the first rotationally equivariant variational autoencoder that directly models natural lighting on the sphere without relying on 2D projections. By introducing SO(2)-equivariant fully connected layers—built upon an extension of Vector Neurons—and integrating a VN-ViT encoder with a rotationally equivariant conditional neural field decoder, our approach reduces the equivariance constraint from SO(3) to SO(2) while yielding a well-structured latent space with smooth interpolations. Experiments demonstrate that our method outperforms standard Vector Neurons in equivariant modeling tasks, significantly improving the geometric consistency and generative quality of illumination representations.

📝 Abstract
Inverse rendering is an ill-posed problem, but priors, such as illumination priors, can simplify it. Existing work either disregards the spherical and rotation-equivariant nature of illumination environments or does not provide a well-behaved latent space. We propose a rotation-equivariant variational autoencoder that models natural illumination on the sphere without relying on 2D projections. To preserve the SO(2)-equivariance of environment maps, we use a novel Vector Neuron Vision Transformer (VN-ViT) as encoder and a rotation-equivariant conditional neural field as decoder. In the encoder, we reduce the equivariance from SO(3) to SO(2) using a novel SO(2)-equivariant fully connected layer, an extension of Vector Neurons. We show that our SO(2)-equivariant fully connected layer outperforms standard Vector Neurons when used in our SO(2)-equivariant model. Compared to previous methods, our variational autoencoder offers a better-behaved latent space with smoother interpolations.
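The key building block described above, an SO(2)-equivariant fully connected layer extending Vector Neurons, can be illustrated with a minimal sketch. In a standard Vector Neuron linear layer, channel-mixing weights act on lists of 3D vector features and commute with any SO(3) rotation. Restricting to rotations about the vertical axis (SO(2)) lets the z-component, which such rotations leave unchanged, be mixed with its own separate weights. The function and weight names below are hypothetical illustrations, not the paper's actual implementation:

```python
import numpy as np

def so2_equivariant_linear(x, W_xy, W_z):
    """Sketch of an SO(2)-equivariant linear layer over vector features.

    x    : (C_in, 3) array of 3D vector features
    W_xy : (C_out, C_in) weights for the in-plane (x, y) components
    W_z  : (C_out, C_in) weights for the z component

    Channel mixing commutes with any rotation about the z-axis, so the
    xy part stays equivariant; the z part is SO(2)-invariant and may
    therefore use independent weights, unlike full SO(3) Vector Neurons.
    """
    xy = x[:, :2]   # (C_in, 2): rotates under SO(2)
    z = x[:, 2:]    # (C_in, 1): invariant under SO(2)
    out_xy = W_xy @ xy
    out_z = W_z @ z
    return np.concatenate([out_xy, out_z], axis=1)  # (C_out, 3)
```

Equivariance is easy to verify numerically: rotating the input features about the z-axis and then applying the layer gives the same result as applying the layer first and rotating its output.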
Problem

Research questions and friction points this paper is trying to address.

inverse rendering
natural illumination
rotation-equivariance
latent space
spherical representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

rotation-equivariant
variational autoencoder
natural illumination
Vector Neuron Vision Transformer
SO(2)-equivariance