🤖 AI Summary
This work addresses key limitations of autoregressive approaches to monocular depth estimation: the substantial modality gap between RGB and depth, the inefficiency of pixel-wise generation, and instability in sequential prediction. To overcome these challenges, the authors propose a coarse-to-fine fractal autoregressive generation framework that combines fractal recursive visual autoregressive units, multi-scale VCFR feature fusion, a conditional continuous diffusion denoising mechanism, and an uncertainty-aware multi-sample consensus aggregation strategy. This design avoids both the error introduced by discrete quantization and the cost of pixel-by-pixel generation. The method achieves significant improvements in accuracy, computational efficiency, and prediction stability on standard benchmarks, while also enabling pixel-level reliability estimation.
📝 Abstract
Monocular depth estimation can benefit from autoregressive (AR) generation, but direct AR modeling is hindered by the modality gap between RGB and depth, inefficient pixel-wise generation, and instability in continuous depth prediction. We propose a Fractal Visual Autoregressive Diffusion framework that reformulates depth estimation as a coarse-to-fine, next-scale autoregressive generation process. A VCFR module fuses multi-scale image features with current depth predictions to improve cross-modal conditioning, while a conditional denoising diffusion loss models depth distributions directly in continuous space and mitigates errors caused by discrete quantization. To improve computational efficiency, we organize the scale-wise generators into a fractal recursive architecture, reusing a base visual AR unit in a self-similar hierarchy. We further introduce an uncertainty-aware robust consensus aggregation scheme for multi-sample inference to improve fusion stability and provide a practical pixel-wise reliability estimate. Experiments on standard benchmarks demonstrate strong performance and validate the effectiveness of the proposed design.
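The paper does not specify the exact form of its uncertainty-aware consensus aggregation, but the general idea of robustly fusing several sampled depth maps while deriving a per-pixel reliability estimate can be sketched as follows. This is a minimal illustration, not the authors' implementation: the median-anchored inverse-deviation weighting and the `consensus_aggregate` helper are assumptions chosen for clarity.

```python
import numpy as np

def consensus_aggregate(samples: np.ndarray, eps: float = 1e-6):
    """Fuse S sampled depth maps into one prediction plus an uncertainty map.

    samples: array of shape (S, H, W), multiple depth samples for one image.
    Returns (fused_depth, uncertainty), each of shape (H, W).
    Note: this weighting scheme is a hypothetical stand-in for the paper's
    uncertainty-aware robust consensus aggregation.
    """
    # Robust central estimate: per-pixel median across samples.
    median = np.median(samples, axis=0)
    # Down-weight samples that deviate from the consensus at each pixel.
    dev = np.abs(samples - median)
    weights = 1.0 / (dev + eps)
    weights /= weights.sum(axis=0, keepdims=True)
    # Weighted fusion of the samples.
    fused = (weights * samples).sum(axis=0)
    # Pixel-wise uncertainty: weighted spread of samples around the fusion.
    uncertainty = np.sqrt((weights * (samples - fused) ** 2).sum(axis=0))
    return fused, uncertainty
```

A scheme of this shape yields the two outputs the abstract mentions: a stabilized fused depth map and a practical pixel-wise reliability estimate (low spread across samples indicating high confidence).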