Grounding Continuous Representations in Geometry: Equivariant Neural Fields

📅 2024-06-09
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
Existing conditional neural fields (CNFs) underperform on fine-grained geometric reasoning tasks such as classification, segmentation, and reconstruction, because their latent spaces do not explicitly model local geometry (e.g., locality and orientation). To address this, we propose Equivariant Neural Fields (ENFs), a CNF framework with *explicit geometric equivariance*. ENFs couple neural-field decoding with a geometric latent variable, a point cloud of features, through geometry-aware cross-attention, yielding an equivariant mapping between latent space and continuous signals. The latent variables transform predictably under rotations and translations, which enables geometric reasoning in latent space and weight sharing over similar local patterns. The method comprises geometric latent-variable modeling, equivariant cross-attention, neural-field conditioning, and joint optimization of point cloud and field, implemented efficiently in JAX. Experiments show that ENFs consistently outperform geometry-agnostic baselines on classification, segmentation, forecasting, reconstruction, and generative-modeling tasks, improving both the geometric fidelity of the latent representations and learning efficiency.
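The geometry-aware cross-attention can be sketched roughly as follows. This is a hypothetical JAX illustration based on the summary above, not the authors' implementation: the function name, weight shapes, and the use of relative positions as attention queries are all assumptions, and only translation equivariance is demonstrated (rotation handling is omitted for brevity).

```python
# Hypothetical sketch: decode a field value at coordinate x from a latent
# point cloud {(p_i, c_i)} via cross-attention on relative geometry.
import jax
import jax.numpy as jnp

def cross_attention_decode(x, latent_pos, latent_feat, w_q, w_k, w_v, w_o):
    """x: (d,) query coordinate; latent_pos: (n, d) poses p_i;
    latent_feat: (n, c) features c_i; w_*: projection matrices."""
    # Queries depend only on relative positions (x - p_i), so shifting both
    # x and the latent poses by the same vector leaves the output unchanged
    # (translation equivariance of the decoding).
    rel = x[None, :] - latent_pos                     # (n, d)
    q = rel @ w_q                                     # (n, h)
    k = latent_feat @ w_k                             # (n, h)
    v = latent_feat @ w_v                             # (n, h)
    scores = jnp.sum(q * k, axis=-1) / jnp.sqrt(q.shape[-1])  # (n,)
    att = jax.nn.softmax(scores)                      # attention over latents
    return (att @ v) @ w_o                            # field value f(x; z)
```

Because the attention keys come from the latent features and the queries from relative geometry, the same local pattern stored at different latent poses is decoded by the same weights, which is one way to realize the weight sharing the summary describes.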

📝 Abstract
Conditional Neural Fields (CNFs) are increasingly being leveraged as continuous signal representations, by associating each data-sample with a latent variable that conditions a shared backbone Neural Field (NeF) to reconstruct the sample. However, existing CNF architectures face limitations when using this latent downstream in tasks requiring fine-grained geometric reasoning, such as classification and segmentation. We posit that this results from lack of explicit modelling of geometric information (e.g., locality in the signal or the orientation of a feature) in the latent space of CNFs. As such, we propose Equivariant Neural Fields (ENFs), a novel CNF architecture which uses a geometry-informed cross-attention to condition the NeF on a geometric variable--a latent point cloud of features--that enables an equivariant decoding from latent to field. We show that this approach induces a steerability property by which both field and latent are grounded in geometry and amenable to transformation laws: if the field transforms, the latent representation transforms accordingly--and vice versa. Crucially, this equivariance relation ensures that the latent is capable of (1) representing geometric patterns faithfully, allowing for geometric reasoning in latent space, and (2) weight-sharing over similar local patterns, allowing for efficient learning of datasets of fields. We validate these main properties in a range of tasks including classification, segmentation, forecasting, reconstruction and generative modelling, showing clear improvement over baselines with a geometry-free latent space. Code attached to submission https://github.com/Dafidofff/enf-jax. Code for a clean and minimal repo https://github.com/david-knigge/enf-min-jax.
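The steerability property described in the abstract can be written compactly. The notation below is an assumption for illustration, not quoted from the paper, and treats the latent features as invariant scalars:

```latex
% Let z = \{(p_i, c_i)\}_{i=1}^N be the latent point cloud (poses p_i,
% features c_i), and let a group element g act on coordinates by
% x \mapsto g x and on the latent by g \cdot z = \{(g p_i, c_i)\}.
% Equivariant decoding then means
f_\theta(x;\, g \cdot z) \;=\; f_\theta(g^{-1} x;\, z),
% i.e., transforming the latent is equivalent to transforming the field:
% if the field moves, the latent point cloud moves with it, and vice versa.
```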
Problem

Research questions and friction points this paper is trying to address.

How can geometric reasoning be enabled in the latent space of CNFs?
How can cross-attention conditioning be made geometry-informed?
How can equivariance between latent and field be guaranteed?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Equivariant Neural Fields (ENFs)
Geometry-informed cross-attention
Latent point-cloud features