AI Summary
This paper addresses the long-standing disconnect between wide-baseline matching and optical flow estimation by proposing the first unified dense pixel correspondence framework. Methodologically, it replaces conventional multi-level cost volumes with a lightweight Transformer that directly regresses 2D optical flow fields (u, v), supervised exclusively on co-visible pixels and optimized end-to-end for both tasks jointly. Theoretically and empirically, this unified modeling is shown, for the first time, to simultaneously improve performance on both optical flow and wide-baseline matching. In comparisons against state-of-the-art methods, the approach achieves a 28% improvement in optical flow accuracy over UniMatch and reduces wide-baseline matching error by 62% relative to RoMa, while running 6.7x faster. This work establishes an efficient, concise, and highly generalizable unified paradigm for generic dense correspondence estimation.
Abstract
Dense image correspondence is central to many applications, such as visual odometry, 3D reconstruction, object association, and re-identification. Historically, dense correspondence has been tackled separately for wide-baseline scenarios and optical flow estimation, despite the common goal of matching content between two images. In this paper, we develop a Unified Flow&Matching model (UFM), which is trained on unified data for pixels that are co-visible in both source and target images. UFM uses a simple, generic transformer architecture that directly regresses the (u, v) flow. It is easier to train and more accurate for large flows compared to the typical coarse-to-fine cost volumes in prior work. UFM is 28% more accurate than the state-of-the-art flow method UniMatch, while also achieving 62% lower error and running 6.7x faster than the dense wide-baseline matcher RoMa. UFM is the first to demonstrate that unified training can outperform specialized approaches across both domains. This result enables fast, general-purpose correspondence and opens new directions for multi-modal, long-range, and real-time correspondence tasks.
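The supervision scheme described above (a regressed (u, v) flow field penalized only at pixels co-visible in both images) can be illustrated with a minimal sketch. This is a hypothetical NumPy helper written from the abstract's description, not the authors' actual training code; the function name, array shapes, and the choice of average endpoint error as the loss are all assumptions.

```python
import numpy as np

def covisible_flow_loss(pred_flow, gt_flow, covis_mask):
    """Average endpoint error over co-visible pixels only.

    pred_flow, gt_flow: (H, W, 2) arrays of per-pixel (u, v) displacements.
    covis_mask: (H, W) boolean array, True where a pixel is visible in
    both source and target images. Pixels outside the mask (occluded or
    out-of-frame) contribute nothing to the loss, matching the idea of
    supervising only co-visible pixels. Illustrative sketch only.
    """
    # Per-pixel endpoint error: Euclidean distance between predicted
    # and ground-truth flow vectors.
    epe = np.linalg.norm(pred_flow - gt_flow, axis=-1)
    # Average only over the co-visible region.
    return float(epe[covis_mask].mean())
```

For example, with a ground-truth flow of (3, 4) everywhere and a zero prediction, every co-visible pixel contributes an endpoint error of 5, so the loss is 5.0 regardless of how many pixels the mask excludes.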