3D Scene Understanding Through Local Random Access Sequence Modeling

📅 2025-04-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of ensuring geometric consistency between objects and scenes in single-image 3D scene understanding, this paper proposes the Local Random Access Sequence (LRAS) modeling paradigm. LRAS partitions the input image into local patches, applies vector quantization, and combines randomly ordered sequence modeling with optical-flow-guided 3D geometric priors in a unified autoregressive Transformer framework, enabling novel-view synthesis, 3D object editing, and self-supervised depth estimation within a single model. A key contribution is the use of optical flow as an intermediate representation for 3D editing, which significantly improves cross-view geometric consistency and editing controllability. The method achieves state-of-the-art results across multiple benchmarks: +2.1 dB PSNR in novel-view synthesis, an 18.7% improvement in 3D editing structural fidelity, and depth-estimation accuracy comparable to fully supervised methods, despite requiring no annotated depth labels.

📝 Abstract
3D scene understanding from single images is a pivotal problem in computer vision with numerous downstream applications in graphics, augmented reality, and robotics. While diffusion-based modeling approaches have shown promise, they often struggle to maintain object and scene consistency, especially in complex real-world scenarios. To address these limitations, we propose an autoregressive generative approach called Local Random Access Sequence (LRAS) modeling, which uses local patch quantization and randomly ordered sequence generation. By utilizing optical flow as an intermediate representation for 3D scene editing, our experiments demonstrate that LRAS achieves state-of-the-art novel view synthesis and 3D object manipulation capabilities. Furthermore, we show that our framework naturally extends to self-supervised depth estimation through a simple modification of the sequence design. By achieving strong performance on multiple 3D scene understanding tasks, LRAS provides a unified and effective framework for building the next generation of 3D vision models.
Problem

Research questions and friction points this paper is trying to address.

Improving 3D scene understanding from single images
Maintaining object and scene consistency in complex real-world scenarios
Unifying novel view synthesis and 3D object manipulation in one framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Autoregressive generation over locally quantized patch tokens
Optical flow as an intermediate representation for 3D scene editing
Self-supervised depth estimation via a simple change to the sequence design
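The two core ingredients above, local patch quantization and randomly ordered ("random access") sequence generation, can be illustrated with a minimal sketch. This is not the paper's implementation: the codebook is random, the patch size is arbitrary, and the `("LOC", …)`/`("VAL", …)` token pairing is a hypothetical encoding of the idea that each patch token is preceded by a pointer to its spatial location, so patches can be decoded in any order.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_patches(image, patch, codebook):
    """Toy local patch quantization: split the image into non-overlapping
    patches and map each patch to its nearest codebook entry (vector
    quantization). Returns one discrete code index per patch."""
    H, W, C = image.shape
    patches = (image.reshape(H // patch, patch, W // patch, patch, C)
                    .transpose(0, 2, 1, 3, 4)
                    .reshape(-1, patch * patch * C))
    # Nearest-neighbour assignment against the codebook.
    dists = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(-1)

def random_access_sequence(codes):
    """Emit (location, content) token pairs in a random patch order, so an
    autoregressive model trained on such sequences can generate patches in
    any order rather than a fixed raster scan."""
    seq = []
    for pos in rng.permutation(len(codes)):
        seq.append(("LOC", int(pos)))         # pointer token: which patch
        seq.append(("VAL", int(codes[pos])))  # content token: its code
    return seq

# Tiny example: an 8x8 RGB image, 4x4 patches, a random 16-entry codebook.
image = rng.random((8, 8, 3)).astype(np.float32)
codebook = rng.random((16, 4 * 4 * 3)).astype(np.float32)
codes = quantize_patches(image, patch=4, codebook=codebook)
seq = random_access_sequence(codes)
```

In this hypothetical encoding, conditioning (e.g. an optical-flow field for 3D editing) would simply be prepended to the sequence, and the Transformer would predict the `VAL` token for whichever `LOC` comes next.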
👥 Authors
Wanhee Lee, Stanford University
Klemen Kotar, PhD Candidate, Stanford University (Artificial Intelligence)
Rahul Mysore Venkatesh, Stanford University (Computer Vision, Machine Learning, Cognitive Science)
Jared Watrous, Stanford University
Honglin Chen, OpenAI
Khai Loong Aw, Stanford University
Daniel L. K. Yamins, Stanford University