NeAR: Coupled Neural Asset-Renderer Stack

📅 2025-11-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current neural asset creation and neural rendering are largely decoupled, limiting the fidelity, consistency, and efficiency of end-to-end learnable graphics pipelines. To address this, we propose NeAR, the first unified architecture that jointly optimizes neural asset representations and neural renderers. Our approach introduces (1) a lighting-homogenized 3D implicit representation that explicitly disentangles geometry, material, and illumination; and (2) a lighting-aware neural renderer integrating Trellis-style Structured 3D Latents (SLAT), rectified-flow modeling, HDR environment-map encoding, and view embeddings. Evaluated on single-image reconstruction, novel-view synthesis, and relighting tasks, NeAR achieves state-of-the-art quantitative performance and superior visual quality, demonstrating the efficacy of co-designing neural assets and renderers.
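
To make the coupling concrete, below is a minimal PyTorch sketch of what an end-to-end asset-plus-renderer stack of this shape could look like. Everything here (module names, dimensions, layer choices) is an illustrative assumption, not the paper's actual architecture; the point is only that a single loss backpropagates through both the asset encoder and the renderer.

```python
# Hypothetical sketch of a coupled asset-renderer stack (names and shapes are
# assumptions, not the authors' code): an asset branch maps a casually lit
# image to a lighting-homogenized latent, and a renderer branch decodes that
# latent under a target HDR environment-map code and camera view embedding.
import torch
import torch.nn as nn

class AssetEncoder(nn.Module):
    """Image -> lighting-homogenized latent (stand-in for the SLAT backbone)."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.SiLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, image):
        return self.net(image)

class LightingAwareRenderer(nn.Module):
    """Latent + HDR env-map code + view embedding -> RGB image."""
    def __init__(self, latent_dim=256, env_dim=64, view_dim=16, out_res=64):
        super().__init__()
        self.out_res = out_res
        self.decode = nn.Sequential(
            nn.Linear(latent_dim + env_dim + view_dim, 512), nn.SiLU(),
            nn.Linear(512, 3 * out_res * out_res),
        )

    def forward(self, latent, env_code, view_emb):
        h = torch.cat([latent, env_code, view_emb], dim=-1)
        rgb = self.decode(h).view(-1, 3, self.out_res, self.out_res)
        return torch.sigmoid(rgb)

# One end-to-end pass: both branches share a single loss, which is the point
# of coupling the asset representation and the renderer.
encoder, renderer = AssetEncoder(), LightingAwareRenderer()
image = torch.rand(2, 3, 128, 128)    # casually lit input views
env_code = torch.randn(2, 64)         # encoded HDR environment map
view_emb = torch.randn(2, 16)         # camera/view embedding
pred = renderer(encoder(image), env_code, view_emb)
loss = nn.functional.mse_loss(pred, torch.rand_like(pred))
loss.backward()                       # gradients flow through both branches
```

Because the renderer consumes the asset latent directly, rendering error also shapes the asset representation, which is the claimed benefit of co-design over treating the two as independent components.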

📝 Abstract
Neural asset authoring and neural rendering have emerged as fundamentally disjoint threads: one generates digital assets using neural networks for traditional graphics pipelines, while the other develops neural renderers that map conventional assets to images. However, the potential of jointly designing the asset representation and renderer remains largely unexplored. We argue that coupling them can unlock an end-to-end learnable graphics stack with benefits in fidelity, consistency, and efficiency. In this paper, we explore this possibility with NeAR: a Coupled Neural Asset-Renderer Stack. On the asset side, we build on Trellis-style Structured 3D Latents and introduce a lighting-homogenized neural asset: from a casually lit input, a rectified-flow backbone predicts a Lighting-Homogenized SLAT that encodes geometry and intrinsic material cues in a compact, view-agnostic latent. On the renderer side, we design a lighting-aware neural renderer that uses this neural asset, along with explicit view embeddings and HDR environment maps, to achieve real-time, relightable rendering. We validate NeAR on four tasks: (1) G-buffer-based forward rendering, (2) random-lit single-image reconstruction, (3) unknown-lit single-image relighting, and (4) novel-view relighting. Our coupled stack surpasses state-of-the-art baselines in both quantitative metrics and perceptual quality. We hope this coupled asset-renderer perspective inspires future graphics stacks that view neural assets and renderers as co-designed components instead of independent entities.
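
The abstract states that a rectified-flow backbone predicts the Lighting-Homogenized SLAT from a casually lit input. As background on that machinery, here is a minimal rectified-flow sampler under the standard formulation (a learned velocity field integrated with Euler steps from noise at t=0 toward the target latent at t=1); the conditioning scheme, dimensions, and module names below are placeholders, not the paper's.

```python
# Minimal rectified-flow sampling sketch, assuming the backbone is a velocity
# field v_theta(x_t, t, cond) conditioned on the input image. Shapes are
# illustrative, not the paper's actual SLAT dimensions.
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Predicts the straight-line velocity from noise toward the target latent."""
    def __init__(self, dim=256, cond_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + cond_dim + 1, 512), nn.SiLU(),
            nn.Linear(512, dim),
        )

    def forward(self, x_t, t, cond):
        t_feat = t.expand(x_t.shape[0], 1)    # broadcast time to the batch
        return self.net(torch.cat([x_t, cond, t_feat], dim=-1))

@torch.no_grad()
def sample_slat(model, cond, dim=256, steps=20):
    """Euler integration of dx/dt = v_theta from t=0 (noise) to t=1 (latent)."""
    x = torch.randn(cond.shape[0], dim)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((1, 1), i * dt)
        x = x + dt * model(x, t, cond)
    return x  # lighting-homogenized latent handed to the renderer

model = VelocityField()
image_cond = torch.randn(2, 256)   # encoded casually lit input image
latent = sample_slat(model, image_cond)
print(latent.shape)                # torch.Size([2, 256])
```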
Problem

Research questions and friction points this paper is trying to address.

Bridging neural asset creation and neural rendering into a unified, end-to-end learnable pipeline
Deriving lighting-homogenized neural assets from casually lit input images
Enabling real-time, relightable rendering with explicit view and lighting control
Innovation

Methods, ideas, or system contributions that make the work stand out.

Coupling the neural asset representation and the neural renderer into a single jointly optimized stack
Encoding geometry and intrinsic material cues in a lighting-homogenized latent (SLAT)
Achieving real-time relightable rendering conditioned on HDR environment maps (see the sketch below)
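
As referenced in the list above, here is one plausible way the HDR environment-map conditioning could be realized. The paper states only that HDR environment maps are encoded as a renderer input, so the log compression and the small conv encoder below are assumptions chosen to keep the HDR dynamic range tractable.

```python
# Sketch of turning an HDR environment map into the renderer's lighting code
# (assumed design, not the paper's). Log1p compresses the unbounded HDR range
# before a small conv encoder pools it into a fixed-size vector.
import torch
import torch.nn as nn

class EnvMapEncoder(nn.Module):
    def __init__(self, env_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.SiLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, env_dim),
        )

    def forward(self, hdr):  # hdr: (B, 3, H, W) lat-long map, values >= 0
        return self.net(torch.log1p(hdr))

# Relighting then amounts to swapping the environment code while the
# lighting-homogenized asset latent stays fixed:
enc = EnvMapEncoder()
env_a = enc(torch.rand(1, 3, 64, 128) * 10.0)   # e.g. a bright outdoor map
env_b = enc(torch.rand(1, 3, 64, 128) * 0.5)    # e.g. a dim indoor map
# renderer(latent, env_a, view_emb) vs. renderer(latent, env_b, view_emb)
```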