RUBIK: A Structured Benchmark for Image Matching across Geometric Challenges

📅 2025-02-27
🤖 AI Summary
Existing image matching benchmarks lack systematic modeling of geometric challenges, hindering rigorous evaluation of method robustness under realistic geometric degradations. Method: We introduce the first structured geometric-challenge benchmark for image matching, built on three orthogonal difficulty dimensions (overlap ratio, scale ratio, and viewpoint angle) and partitioning 16.5K nuScenes image pairs into 33 fine-grained difficulty levels. This multidimensional geometric difficulty grading paradigm enables the first quantitative characterization of the accuracy-latency trade-off between detector-free and detector-based methods: detector-free methods achieve a 47.3% average success rate but incur 2-8× higher latency. Results: Even state-of-the-art methods attain only a 54.8% success rate under extreme geometric combinations. The benchmark evaluates 14 mainstream algorithms (e.g., SuperPoint, LoFTR, MatchFormer), provides a standardized evaluation framework, and is publicly released to advance research on geometrically robust matching.

📝 Abstract
Camera pose estimation is crucial for many computer vision applications, yet existing benchmarks offer limited insight into method limitations across different geometric challenges. We introduce RUBIK, a novel benchmark that systematically evaluates image matching methods across well-defined geometric difficulty levels. Using three complementary criteria (overlap, scale ratio, and viewpoint angle), we organize 16.5K image pairs from nuScenes into 33 difficulty levels. Our comprehensive evaluation of 14 methods reveals that while recent detector-free approaches achieve the best performance (>47% success rate), they come with significant computational overhead compared to detector-based methods (150-600ms vs. 40-70ms). Even the best performing method succeeds on only 54.8% of the pairs, highlighting substantial room for improvement, particularly in challenging scenarios combining low overlap, large scale differences, and extreme viewpoint changes. The benchmark will be made publicly available.
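The abstract's difficulty grading can be pictured as binning each image pair along the three criteria and combining the per-dimension bins into a level label. The sketch below is illustrative only: the bin edges, the number of bins per dimension, and the `classify`/`difficulty_level` helpers are all hypothetical assumptions, since the summary does not specify how RUBIK's 33 levels are actually constructed.

```python
def classify(value, edges):
    """Map a value to a bin index given ascending bin edges; None if out of range."""
    for i in range(len(edges) - 1):
        if edges[i] <= value < edges[i + 1]:
            return i
    return None

# Hypothetical bin edges for each of the three criteria (not RUBIK's real ones).
OVERLAP_EDGES = [0.1, 0.3, 0.5, 1.01]   # overlap ratio: lower overlap is harder
SCALE_EDGES = [1.0, 1.5, 3.0, 6.0]      # scale ratio between the two views
VIEW_EDGES = [0.0, 15.0, 45.0, 90.0]    # viewpoint angle in degrees

def difficulty_level(overlap, scale_ratio, view_angle):
    """Combine per-dimension bins into one difficulty label.

    Returns a (overlap_bin, scale_bin, view_bin) tuple, or None if the pair
    falls outside the binned ranges (i.e., excluded from the benchmark).
    """
    o = classify(overlap, OVERLAP_EDGES)
    s = classify(scale_ratio, SCALE_EDGES)
    v = classify(view_angle, VIEW_EDGES)
    if None in (o, s, v):
        return None
    # 3 bins per dimension yields 27 combinations; RUBIK's 33 levels imply
    # a finer partition than this sketch, whose details the summary omits.
    return (o, s, v)
```

A pair with 40% overlap, a 2× scale change, and a 20° viewpoint difference would land in the middle bin of every dimension under these assumed edges.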
Problem

Research questions and friction points this paper is trying to address.

Evaluate image matching across geometric challenges
Assess method performance with varying difficulty levels
Compare detector-free and detector-based computational efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic geometric difficulty levels
Quantitative comparison of detector-free vs. detector-based methods
Public benchmark for image matching