🤖 AI Summary
To address the challenge of ensuring geometric consistency between objects and scenes in single-image 3D scene understanding, this paper proposes the Local Random Access Sequence (LRAS) modeling paradigm. LRAS partitions the input image into local patches, applies vector quantization, and integrates stochastic sequence modeling with optical-flow-guided 3D geometric priors within a unified autoregressive Transformer framework—enabling simultaneous novel-view synthesis, 3D object editing, and self-supervised depth estimation. We introduce the first LRAS generation mechanism, where optical flow serves as a differentiable intermediate representation for 3D editing, significantly improving cross-view geometric consistency and editing controllability. Our method achieves state-of-the-art performance across multiple benchmarks: +2.1 dB PSNR in novel-view synthesis, +18.7% improvement in 3D editing structural fidelity, and depth estimation accuracy comparable to fully supervised methods—despite requiring zero annotated depth labels.
📝 Abstract
3D scene understanding from single images is a pivotal problem in computer vision with numerous downstream applications in graphics, augmented reality, and robotics. While diffusion-based modeling approaches have shown promise, they often struggle to maintain object and scene consistency, especially in complex real-world scenarios. To address these limitations, we propose an autoregressive generative approach called Local Random Access Sequence (LRAS) modeling, which uses local patch quantization and randomly ordered sequence generation. By utilizing optical flow as an intermediate representation for 3D scene editing, our experiments demonstrate that LRAS achieves state-of-the-art novel view synthesis and 3D object manipulation capabilities. Furthermore, we show that our framework naturally extends to self-supervised depth estimation through a simple modification of the sequence design. By achieving strong performance on multiple 3D scene understanding tasks, LRAS provides a unified and effective framework for building the next generation of 3D vision models.