Multi-Sensor Matching with HyperNetworks

📅 2026-01-18
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the limited robustness of multimodal (e.g., visible and infrared) image patch matching under domain shift by proposing a lightweight descriptor learning architecture that, for the first time, integrates hypernetworks with conditional instance normalization. The hypernetwork enables channel-wise adaptive scaling and shifting, while conditional instance normalization facilitates modality-specific feature adaptation. This design significantly enhances robustness to cross-modal appearance variations without compromising inference efficiency. To further support research in cross-domain multimodal matching, the authors introduce and release a new dataset, GAP-VIR. Extensive experiments demonstrate that the proposed method achieves state-of-the-art or comparable performance on standard VIS–NIR and other VIS–IR benchmarks, while maintaining lower computational overhead.
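To make the core mechanism concrete, here is a minimal NumPy sketch of conditional instance normalization with hypernetwork-generated per-channel scale and shift. The names (`conditional_instance_norm`, `ModalityHyperNet`) and the one-layer hypernetwork are illustrative assumptions, not the authors' implementation; in the paper these parameters would be learned jointly with the descriptor CNN.

```python
import numpy as np

def conditional_instance_norm(x, gamma, beta, eps=1e-5):
    """Instance-normalize x of shape (N, C, H, W), then apply per-channel
    scale/shift supplied externally (here: by a hypernetwork)."""
    mu = x.mean(axis=(2, 3), keepdims=True)   # per-sample, per-channel mean
    var = x.var(axis=(2, 3), keepdims=True)   # per-sample, per-channel variance
    x_hat = (x - mu) / np.sqrt(var + eps)
    # gamma/beta have shape (N, C); broadcast over the spatial dims
    return gamma[:, :, None, None] * x_hat + beta[:, :, None, None]

class ModalityHyperNet:
    """Toy hypernetwork: maps a modality one-hot code to per-channel
    (gamma, beta) via a single linear layer. Weights are random here;
    in practice they are trained with the rest of the network."""
    def __init__(self, num_modalities, channels, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((num_modalities, 2 * channels)) * 0.1
        # Bias initialized so the transform starts near identity (gamma=1, beta=0)
        self.b = np.concatenate([np.ones(channels), np.zeros(channels)])
        self.channels = channels

    def __call__(self, modality_onehot):
        out = modality_onehot @ self.W + self.b   # (N, 2C)
        return out[:, :self.channels], out[:, self.channels:]

# Example: a batch of 2 patches, one visible ([1, 0]) and one infrared ([0, 1])
x = np.random.default_rng(1).standard_normal((2, 4, 8, 8))
codes = np.array([[1.0, 0.0], [0.0, 1.0]])
hyper = ModalityHyperNet(num_modalities=2, channels=4)
gamma, beta = hyper(codes)
y = conditional_instance_norm(x, gamma, beta)
print(y.shape)  # (2, 4, 8, 8)
```

Because instance normalization zeroes the per-channel spatial statistics first, the hypernetwork's output fully determines each channel's post-normalization mean (`beta`) and spread (`|gamma|`), which is what allows modality-specific adaptation in the shallow layers.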

πŸ“ Abstract
Hypernetworks are models that generate or modulate the weights of another network. They provide a flexible mechanism for injecting context and task conditioning and have proven broadly useful across diverse applications without significant increases in model size. We leverage hypernetworks to improve multimodal patch matching by introducing a lightweight descriptor-learning architecture that augments a Siamese CNN with (i) hypernetwork modules that compute adaptive, per-channel scaling and shifting and (ii) conditional instance normalization that provides modality-specific adaptation (e.g., visible vs. infrared, VIS-IR) in shallow layers. This combination preserves the efficiency of descriptor-based methods during inference while increasing robustness to appearance shifts. Trained with a triplet loss and hard-negative mining, our approach achieves state-of-the-art results on VIS-NIR and other VIS-IR benchmarks and matches or surpasses prior methods on additional datasets, despite their higher inference cost. To spur progress on domain shift, we also release GAP-VIR, a cross-platform (ground/aerial) VIS-IR patch dataset with 500K pairs, enabling rigorous evaluation of cross-domain generalization and adaptation.
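The abstract's training objective, a triplet loss with hard-negative mining, can be sketched as follows. This is an illustrative NumPy version with in-batch (batch-hard) mining; the function name, margin value, and mining scheme are assumptions for exposition, not the paper's exact recipe.

```python
import numpy as np

def l2_normalize(d, eps=1e-8):
    return d / (np.linalg.norm(d, axis=1, keepdims=True) + eps)

def triplet_loss_hard_negatives(anchors, positives, margin=0.5):
    """Triplet margin loss with in-batch hard-negative mining.
    anchors[i] (e.g., a VIS descriptor) matches positives[i] (e.g., the
    corresponding IR descriptor); every other positive in the batch is a
    candidate negative, and the closest one is mined per anchor."""
    a = l2_normalize(anchors)
    p = l2_normalize(positives)
    # Pairwise squared distances between all anchors and all positives: (N, N)
    d2 = ((a[:, None, :] - p[None, :, :]) ** 2).sum(-1)
    pos = np.diag(d2)                        # matching-pair distances
    d2_masked = d2 + np.eye(len(a)) * 1e9    # exclude each anchor's own positive
    hard_neg = d2_masked.min(axis=1)         # closest non-matching descriptor
    return np.maximum(pos - hard_neg + margin, 0.0).mean()

rng = np.random.default_rng(0)
vis = rng.standard_normal((8, 16))
ir = vis + 0.05 * rng.standard_normal((8, 16))  # well-aligned cross-modal pairs
loss = triplet_loss_hard_negatives(vis, ir)
print(float(loss))
```

Mining the hardest in-batch negative keeps the objective focused on the confusable cross-modal pairs, which is where appearance shift hurts descriptor matching most.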
Problem

Research questions and friction points this paper is trying to address.

multi-sensor matching
cross-domain generalization
VIS-IR
appearance shift
multimodal patch matching
Innovation

Methods, ideas, or system contributions that make the work stand out.

HyperNetworks
multimodal matching
conditional instance normalization
descriptor learning
domain adaptation
Eli Passov
Faculty of Computer Science, Bar-Ilan University, Ramat Gan, Israel
N. Netanyahu
Faculty of Computer Science, Bar-Ilan University, Ramat Gan, Israel
Yosi Keller
Faculty of Engineering, Bar-Ilan University
image processing · signal processing · machine learning · dimensionality reduction