CoMatcher: Multi-View Collaborative Feature Matching

📅 2025-04-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address feature matching ambiguities in complex scenes caused by severe occlusion and extreme viewpoint variations, this paper proposes a multi-view collaborative matching paradigm. Moving beyond conventional pairwise matching, our approach establishes a group-level matching framework that explicitly models cross-view projection consistency and global geometric constraints while integrating complementary contextual information. We introduce a deep multi-view feature matching network coupled with a 3D scene representation learning module to enable end-to-end trajectory optimization. Evaluated under challenging conditions—including heavy occlusion and large viewpoint changes—our method significantly improves matching accuracy and trajectory completeness: matching error is reduced by 28.6%, and trajectory completion rate increases by 34.2% compared to state-of-the-art two-view methods. The proposed framework demonstrates superior robustness and generalization, setting a new benchmark for multi-view geometric reasoning in cluttered and dynamic environments.

Technology Category

Application Category

📝 Abstract
This paper proposes a multi-view collaborative matching strategy for reliable track construction in complex scenarios. We observe that the pairwise matching paradigms applied to image set matching often result in ambiguous estimation when the selected independent pairs exhibit significant occlusions or extreme viewpoint changes. This challenge primarily stems from the inherent uncertainty in interpreting intricate 3D structures based on limited two-view observations, as the 3D-to-2D projection leads to significant information loss. To address this, we introduce CoMatcher, a deep multi-view matcher to (i) leverage complementary context cues from different views to form a holistic 3D scene understanding and (ii) utilize cross-view projection consistency to infer a reliable global solution. Building on CoMatcher, we develop a groupwise framework that fully exploits cross-view relationships for large-scale matching tasks. Extensive experiments on various complex scenarios demonstrate the superiority of our method over the mainstream two-view matching paradigm.
Problem

Research questions and friction points this paper is trying to address.

Address ambiguous estimation in image set matching
Overcome 3D structure uncertainty from limited 2D views
Enhance matching reliability in complex multi-view scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-view collaborative matching strategy
Deep multi-view matcher CoMatcher
Cross-view projection consistency utilization
🔎 Similar Papers
No similar papers found.