Multi-Scale Superpatch Matching using Dual Superpixel Descriptors

📅 2020-03-09
🏛️ Pattern Recognition Letters
📈 Citations: 2
Influential: 0
🤖 AI Summary
Existing superpixel neighborhood descriptors struggle to model boundary contours, which limits their robustness in cross-scale matching. This paper proposes a multi-scale superpatch matching framework built on a novel dual superpixel descriptor that couples appearance and geometry, enabling semantically consistent superpatch representation across scales. The method integrates superpixel segmentation, multi-scale pyramid sampling, dual-stream feature embedding, and differentiable matching optimization. Evaluated on benchmarks including HPatches, it significantly improves matching accuracy and recall under large viewpoint changes, strong illumination variation, and motion blur, while running faster at inference than SIFT+RANSAC. Key contributions: (1) a boundary-aware dual-stream superpixel descriptor that jointly encodes appearance and geometric structure; and (2) an end-to-end differentiable multi-scale matching paradigm that unifies feature learning and correspondence optimization.
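To make the "appearance-geometry coupled" idea concrete, here is a minimal sketch of a dual superpixel descriptor in plain NumPy. It is an illustrative toy, not the paper's exact formulation: the appearance stream is the mean color of the superpixel region, and the geometry stream is a histogram of contour-point directions around the superpixel centroid (a crude boundary-shape signature). The function name and the choice of 8 angle bins are assumptions for this sketch.

```python
import numpy as np

def dual_descriptor(image, labels, sp_id, n_bins=8):
    """Toy dual descriptor for one superpixel (illustrative sketch only).

    Appearance stream: mean color over the superpixel region.
    Geometry stream: normalized histogram of boundary-pixel angles
    measured from the superpixel centroid.
    """
    mask = labels == sp_id
    ys, xs = np.nonzero(mask)
    appearance = image[mask].mean(axis=0)        # mean color of the region
    cy, cx = ys.mean(), xs.mean()                # superpixel centroid
    # Boundary pixels: region pixels with at least one 4-neighbor outside it.
    pad = np.pad(mask, 1)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:])
    by, bx = np.nonzero(mask & ~interior)
    angles = np.arctan2(by - cy, bx - cx)        # direction of each contour point
    geometry, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    geometry = geometry / max(geometry.sum(), 1)  # shape signature sums to 1
    return np.concatenate([appearance, geometry])
```

For an RGB image this yields an 11-dimensional vector (3 appearance + 8 geometry); because the geometry stream depends only on boundary directions relative to the centroid, it is insensitive to the color shifts that would dominate a purely appearance-based descriptor.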
Problem

Research questions and friction points this paper is trying to address.

Addresses pattern matching over irregular superpixel decompositions
Improves superpixel descriptors by capturing contour structure information
Enables robust multi-scale pattern matching in image datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual superpatch descriptor combining region and contour features
Multi-scale non-local matching framework across resolutions
Enhanced pattern matching using superpixel interface structures
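The matching step of the multi-scale framework can be sketched as nearest-neighbor assignment between descriptor sets extracted at different pyramid scales. This is a hedged simplification: the paper describes a non-local matching framework, and the brute-force Euclidean argmin below merely stands in for it; the function name and array shapes are assumptions.

```python
import numpy as np

def match_superpatches(desc_a, desc_b):
    """Match each superpatch descriptor in desc_a (shape (Na, D)) to its
    nearest neighbor in desc_b (shape (Nb, D)), e.g. descriptors extracted
    at two different pyramid scales. Returns an index into desc_b per row.
    Brute-force stand-in for the paper's non-local matching, sketch only."""
    # Pairwise squared Euclidean distances, then argmin over candidates.
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)
```

Because the dual descriptors have a fixed length regardless of superpixel size, the same matching routine applies unchanged at every scale of the pyramid.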