Efficient-LVSM: Faster, Cheaper, and Better Large View Synthesis Model via Decoupled Co-Refinement Attention

📅 2026-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high computational complexity and rigid parameter sharing inherent in existing novel view synthesis methods built on full self-attention. To overcome these limitations, we propose a dual-stream Transformer architecture that decouples the processing of input and target views through a collaborative refinement mechanism: self-attention is applied to the input views, while self-then-cross attention operates on the target views. We further introduce an incremental inference strategy compatible with KV caching that significantly reduces redundant computation. Our method achieves a PSNR of 29.86 dB on RealEstate10K using only two input views, outperforming LVSM by 0.2 dB, while converging 2× faster in training and running 4.4× faster at inference. It attains state-of-the-art performance across multiple metrics and generalizes zero-shot to input-view counts unseen during training.
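The summary describes one decoupled block: input-view tokens attend only among themselves, while target-view tokens first self-attend and then cross-attend to the input stream. Below is a minimal PyTorch sketch of what such a layer could look like; the class name `DecoupledCoRefinementLayer`, the pre-norm structure, and all dimensions are illustrative assumptions rather than the authors' implementation, and feed-forward sublayers are omitted for brevity.

```python
# A minimal sketch (not the authors' released code) of one decoupled
# co-refinement layer, assuming a standard pre-norm Transformer block.
import torch
import torch.nn as nn

class DecoupledCoRefinementLayer(nn.Module):
    def __init__(self, dim: int = 768, heads: int = 12):
        super().__init__()
        # Input-view stream: intra-view self-attention only.
        self.in_norm = nn.LayerNorm(dim)
        self.in_self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Target-view stream: self-attention, then cross-attention to inputs.
        self.tgt_norm1 = nn.LayerNorm(dim)
        self.tgt_self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.tgt_norm2 = nn.LayerNorm(dim)
        self.tgt_cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x_in: torch.Tensor, x_tgt: torch.Tensor):
        # x_in:  (B, N_in_tokens,  dim) tokens of all input views
        # x_tgt: (B, N_tgt_tokens, dim) tokens of the target views
        h = self.in_norm(x_in)
        x_in = x_in + self.in_self_attn(h, h, h, need_weights=False)[0]

        h = self.tgt_norm1(x_tgt)
        x_tgt = x_tgt + self.tgt_self_attn(h, h, h, need_weights=False)[0]
        # Target tokens query the refined input-view tokens; input tokens
        # never attend to target tokens, which avoids full self-attention's
        # quadratic cost over the joint token set.
        h = self.tgt_norm2(x_tgt)
        x_tgt = x_tgt + self.tgt_cross_attn(h, x_in, x_in, need_weights=False)[0]
        return x_in, x_tgt
```

The key property is that the input stream never depends on target tokens, so adding more target views leaves the input-view computation unchanged.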

📝 Abstract
Feedforward models for novel view synthesis (NVS) have recently been advanced by transformer-based methods such as LVSM, which apply attention among all input and target views. In this work, we argue that this full self-attention design is suboptimal: it suffers from quadratic complexity in the number of input views and rigid parameter sharing across heterogeneous tokens. We propose Efficient-LVSM, a dual-stream architecture that avoids these issues with a decoupled co-refinement mechanism. It applies intra-view self-attention to the input views and self-then-cross attention to the target views, eliminating unnecessary computation. Efficient-LVSM achieves 29.86 dB PSNR on RealEstate10K with 2 input views, surpassing LVSM by 0.2 dB, while converging 2× faster in training and running 4.4× faster at inference. It achieves state-of-the-art performance on multiple benchmarks, exhibits strong zero-shot generalization to unseen view counts, and enables incremental inference with a KV cache, thanks to its decoupled design.
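Because the input-view stream never attends to target tokens, its per-layer activations can be computed once and reused as cross-attention keys/values for any number of target views. The sketch below illustrates this incremental-inference idea on top of the layer sketched earlier; `model.layers` and the caching of raw token tensors (rather than projected keys/values) are simplifying assumptions, not the paper's exact KV-cache scheme.

```python
# Illustrative incremental inference with a cached input-view stream,
# assuming a stack of DecoupledCoRefinementLayer modules in model.layers.
import torch

@torch.no_grad()
def incremental_render(model, input_tokens, target_token_batches):
    # Run the input stream once; its activations depend only on input views.
    kv_cache = []
    x_in = input_tokens
    for layer in model.layers:
        h = layer.in_norm(x_in)
        x_in = x_in + layer.in_self_attn(h, h, h, need_weights=False)[0]
        kv_cache.append(x_in)  # cross-attention key/value source at this depth

    outputs = []
    for x_tgt in target_token_batches:  # e.g., one novel view at a time
        for layer, x_in_cached in zip(model.layers, kv_cache):
            h = layer.tgt_norm1(x_tgt)
            x_tgt = x_tgt + layer.tgt_self_attn(h, h, h, need_weights=False)[0]
            h = layer.tgt_norm2(x_tgt)
            x_tgt = x_tgt + layer.tgt_cross_attn(
                h, x_in_cached, x_in_cached, need_weights=False)[0]
        outputs.append(x_tgt)
    return outputs
```

Under full self-attention, every new target view would require re-running attention over all input and target tokens jointly; here the input pass is amortized across target views, which is consistent with the reported inference speedup.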
Problem

Research questions and friction points this paper is trying to address.

novel view synthesis
self-attention
computational complexity
parameter sharing
transformer
Innovation

Methods, ideas, or system contributions that make the work stand out.

Efficient-LVSM
decoupled co-refinement
novel view synthesis
transformer attention
incremental inference
Xiaosong Jia
Assistant Professor, Institute of Trustworthy Embodied AI (TEAI), Fudan University
Embodied AI · Autonomous Driving · World Model · Reinforcement Learning
Yihang Sun
Sch. of Computer Science & Sch. of Artificial Intelligence, Shanghai Jiao Tong University
Junqi You
Sch. of Computer Science & Sch. of Artificial Intelligence, Shanghai Jiao Tong University
Songbur Wong
Sch. of Computer Science & Sch. of Artificial Intelligence, Shanghai Jiao Tong University
Zichen Zou
Institute of Trustworthy Embodied AI (TEAI), Fudan University
Junchi Yan
IAPR Fellow & ICML Board Member. SJTU (2018–), SII (2024–), AWS (2019–2022), IBM (2011–2018)
Computational Intelligence · AI4Science · Machine Learning · Autonomous Driving
Zuxuan Wu
Fudan University
Yu-Gang Jiang
Professor, Fudan University. IEEE & IAPR Fellow
Video Analysis · Embodied AI · Trustworthy AI