Vertex Features for Neural Global Illumination

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional voxel-based feature grids suffer from excessive memory consumption, hindering neural rendering efficiency. To address this, we propose Neural Vertex Features—a novel representation that relocates learnable features from volumetric grids to vertices of an explicit triangle mesh for the first time. Leveraging surface geometry as a structural prior, our method aligns feature distributions with object surfaces, yielding a compact and semantically consistent representation. Our framework jointly optimizes explicit mesh geometry, neural radiance fields, and task-driven geometric constraints—enabling high-fidelity rendering under complex lighting conditions such as global illumination. Experiments demonstrate that our approach reduces memory usage to less than 20% of conventional grid-based methods, significantly lowers inference overhead, and maintains competitive rendering quality. The core contribution lies in redefining feature storage around geometric primitives—using mesh vertices as anchors—to break the memory-accuracy trade-off inherent in prior implicit or volumetric representations.

📝 Abstract
Learnable neural representations have recently been widely adopted in 3D scene reconstruction and neural rendering applications. However, traditional feature grid representations often suffer from a substantial memory footprint, posing a significant bottleneck for modern parallel computing hardware. In this paper, we present neural vertex features, a generalized formulation of learnable representation for neural rendering tasks involving explicit mesh surfaces. Instead of uniformly distributing neural features throughout 3D space, our method stores learnable features directly at mesh vertices, leveraging the underlying geometry as a compact and structured representation for neural processing. This not only improves memory efficiency but also strengthens the feature representation by aligning it compactly with the surface through task-specific geometric priors. We validate our neural representation across diverse neural rendering tasks, with a specific emphasis on neural radiosity. Experimental results demonstrate that our method reduces memory consumption to only one-fifth (or even less) of grid-based representations, while maintaining comparable rendering quality and lowering inference overhead.
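The core idea of storing learnable features at mesh vertices can be illustrated with a minimal sketch. The paper does not publish its exact formulation here, so the following assumes the standard approach for surface-anchored features: a hit point on a triangle receives a feature vector by barycentric interpolation of the three per-vertex features, which a small decoder network would then map to radiance. All names and shapes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mesh with V vertices, each storing a learnable
# D-dimensional feature vector (the "neural vertex features").
V, D = 8, 16
vertex_features = rng.standard_normal((V, D)).astype(np.float32)

# A ray hits a triangle with vertex indices (i0, i1, i2) at
# barycentric coordinates (u, v, w), with u + v + w = 1.
tri = np.array([0, 3, 5])
bary = np.array([0.2, 0.5, 0.3], dtype=np.float32)

# Barycentric interpolation of the three vertex features yields the
# per-shading-point feature that a small MLP would decode.
f = bary @ vertex_features[tri]  # shape: (D,)

assert f.shape == (D,)

# Interpolation is affine (weights sum to 1), so a constant feature
# field over the mesh is reproduced exactly at every surface point.
const = np.ones((V, D), dtype=np.float32)
assert np.allclose(bary @ const[tri], 1.0)
```

Memory scales with the vertex count rather than with a dense 3D grid resolution, which is the source of the reported savings: features exist only where surfaces exist.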
Problem

Research questions and friction points this paper is trying to address.

Reducing memory footprint in neural rendering
Improving feature representation with mesh vertices
Maintaining rendering quality with lower memory
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stores learnable features at mesh vertices
Leverages geometry for compact representation
Reduces memory use while maintaining quality
Authors
Rui Su — University of Sydney
Honghao Dong — School of Computer Science, Peking University, Beijing, China
Haojie Jin — School of Computer Science, Peking University, Beijing, China
Yisong Chen — Associate Professor of Computer Science, Peking University
Guoping Wang — School of Computer Science, Peking University, Beijing, China
Sheng Li — School of Computer Science, Peking University, Beijing, China