Real-Time Position-Aware View Synthesis from Single-View Input

📅 2024-12-18
🏛️ arXiv.org
📈 Citations: 2
Influential: 1
🤖 AI Summary
Existing view synthesis methods suffer from insufficient real-time performance, hindering low-latency interactive applications. To address this, we propose a neural rendering method that synthesizes novel views from a single input image and a target pose, without explicit geometric warping. Our key contributions are: (1) a position-aware embedding module, the first to efficiently model complex translational motion within a warping-free framework; and (2) a lightweight dual-branch rendering network integrating multi-layer perceptron-based positional encoding with dual-encoder feature fusion. Evaluated on standard benchmarks, our method achieves over 30 FPS inference while significantly outperforming state-of-the-art methods in PSNR and SSIM, especially under translational camera motion, demonstrating substantial gains in reconstruction fidelity. This work establishes an efficient, high-fidelity paradigm for real-time novel view synthesis.

📝 Abstract
Recent advancements in view synthesis have significantly enhanced immersive experiences across various computer graphics and multimedia applications, including telepresence and entertainment. By enabling the generation of new perspectives from a single input view, view synthesis allows users to better perceive and interact with their environment. However, many state-of-the-art methods, while achieving high visual quality, face limitations in real-time performance, which makes them less suitable for live applications where low latency is critical. In this paper, we present a lightweight, position-aware network designed for real-time view synthesis from a single input image and a target camera pose. The proposed framework consists of a Position Aware Embedding, modeled with a multi-layer perceptron, which efficiently maps positional information from the target pose to high-dimensional feature maps. These feature maps, along with the input image, are fed into a Rendering Network that merges features from dual encoder branches to resolve both high-level semantics and low-level details, producing a realistic new view of the scene. Experimental results demonstrate that our method achieves superior efficiency and visual quality compared to existing approaches, particularly in handling complex translational movements, without explicit geometric operations such as warping. This work marks a step toward enabling real-time view synthesis from a single image for live and interactive applications.
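The abstract's core mechanism, an MLP that maps a target camera pose to a high-dimensional embedding later broadcast into feature maps, can be sketched in a few lines. This is a minimal illustration only: the paper does not publish layer sizes or the exact pose parameterization, so the 6-DoF input, the 6→64→256 layer widths, and the random stand-in weights below are all assumptions, not the authors' trained model.

```python
import math
import random

random.seed(0)  # deterministic stand-in weights for the sketch

def make_layer(n_in, n_out):
    """Return a (weights, biases) pair; random values stand in for trained parameters."""
    w = [[random.gauss(0, 1 / math.sqrt(n_in)) for _ in range(n_in)]
         for _ in range(n_out)]
    b = [0.0] * n_out
    return w, b

def mlp_embed(pose, layers):
    """Pass a pose vector through a small ReLU MLP, mirroring the Position Aware Embedding idea."""
    x = pose
    for i, (w, b) in enumerate(layers):
        x = [sum(wi * xi for wi, xi in zip(row, x)) + bi
             for row, bi in zip(w, b)]
        if i < len(layers) - 1:  # ReLU on hidden layers only
            x = [max(0.0, v) for v in x]
    return x

# Hypothetical 6-DoF target pose: (tx, ty, tz, yaw, pitch, roll)
pose = [0.1, -0.2, 0.5, 0.0, 0.3, 0.0]
layers = [make_layer(6, 64), make_layer(64, 256)]
feat = mlp_embed(pose, layers)
print(len(feat))  # → 256
```

In the full system this 256-dimensional vector would be reshaped or spatially tiled into feature maps and fused with the input image inside the dual-encoder Rendering Network; that fusion step is omitted here.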
Problem

Research questions and friction points this paper is trying to address.

Achieving real-time view synthesis from single-view input
Overcoming latency limitations in live immersive applications
Generating realistic novel views without explicit geometric operations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight network enables real-time view synthesis
Position Aware Embedding maps pose to feature maps
Dual encoder branches merge features for realistic views