🤖 AI Summary
To address the scarcity of dense ground-truth disparity annotations and the severe domain shift encountered in real-world zero-shot stereo matching, this paper proposes an end-to-end framework that requires no real disparity labels. It integrates synthetic data, monocular images, and a small set of real images; leverages monocular depth estimation and diffusion models to generate high-fidelity pseudo-disparity labels; and introduces pseudo-monocular depth supervision with a dynamic scale- and shift-invariant loss to improve cross-domain generalization. Notably, this work is the first to incorporate vision foundation models (specifically ViT-based encoders) as robust feature extractors for stereo matching, and the first to synthesize dense stereo training data directly from single-view images. Extensive experiments demonstrate significant improvements over state-of-the-art methods across multiple benchmarks, achieving zero-shot SOTA performance, particularly under annotation-scarce and cross-domain settings.
📝 Abstract
Stereo matching methods rely on dense pixel-wise ground-truth labels, which are laborious to obtain, especially for real-world datasets. The scarcity of labeled data and the domain gap between synthetic and real-world images pose further challenges. In this paper, we propose a novel framework, **BooSTer**, that leverages both vision foundation models and large-scale mixed image sources, including synthetic, real, and single-view images. First, to fully unleash the potential of large-scale single-view images, we design a data generation strategy that combines monocular depth estimation and diffusion models to generate dense stereo matching data from single-view images. Second, to tackle sparse labels in real-world datasets, we transfer knowledge from monocular depth estimation models, using pseudo-monocular depth labels and a dynamic scale- and shift-invariant loss for additional supervision. Furthermore, we incorporate a vision foundation model as an encoder to extract robust and transferable features, boosting accuracy and generalization. Extensive experiments on benchmark datasets demonstrate the effectiveness of our approach, achieving significant improvements in accuracy over existing methods, particularly in scenarios with limited labeled data and domain shifts.
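Scale- and shift-invariant supervision is needed here because monocular depth predictions are only defined up to an unknown scale and shift relative to metric disparity. A minimal sketch of such a loss, in the spirit of MiDaS-style least-squares alignment, is shown below; the function name, the L1 penalty, and the omission of the paper's "dynamic" weighting are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def ssi_loss(pred, target, mask):
    """Scale- and shift-invariant loss (illustrative sketch, not the
    paper's exact loss): align `pred` to `target` with a closed-form
    scale s and shift t, then average the absolute error on valid pixels."""
    d = pred[mask].astype(np.float64)   # predicted pseudo-depth, valid pixels
    g = target[mask].astype(np.float64) # reference disparity/depth
    n = d.size
    # Solve min_{s,t} sum_i (s*d_i + t - g_i)^2 via the 2x2 normal equations.
    A = np.array([[np.dot(d, d), d.sum()],
                  [d.sum(),      float(n)]])
    b = np.array([np.dot(d, g), g.sum()])
    s, t = np.linalg.solve(A, b)
    return float(np.mean(np.abs(s * d + t - g)))
```

Because the alignment is solved in closed form before the error is measured, any prediction that matches the target up to an affine transform incurs (near-)zero loss, which is exactly the invariance the pseudo-monocular supervision relies on.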