AI Summary
This work addresses the limited robustness of multimodal (e.g., visible and infrared) image patch matching under domain shift by proposing a lightweight descriptor learning architecture that, for the first time, integrates hypernetworks with conditional instance normalization. The hypernetwork enables channel-wise adaptive scaling and shifting, while conditional instance normalization facilitates modality-specific feature adaptation. This design significantly enhances robustness to cross-modal appearance variations without compromising inference efficiency. To further support research in cross-domain multimodal matching, the authors introduce and release a new dataset, GAP-VIR. Extensive experiments demonstrate that the proposed method achieves state-of-the-art or comparable performance on standard VIS-NIR and other VIS-IR benchmarks, while maintaining lower computational overhead.
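The combination of conditional instance normalization with hypernetwork-generated modulation can be illustrated with a minimal sketch. The snippet below is not the paper's implementation: the embedding size, weight values, and function names are illustrative assumptions. It shows the core idea, normalizing each channel's feature map per instance, then applying a per-channel scale and shift that a tiny linear "hypernetwork" produces from a modality embedding (e.g., one embedding for VIS, another for IR).

```python
import math

def instance_norm(channel, eps=1e-5):
    """Normalize one channel's (flattened) feature map to zero mean, unit variance."""
    mean = sum(channel) / len(channel)
    var = sum((x - mean) ** 2 for x in channel) / len(channel)
    return [(x - mean) / math.sqrt(var + eps) for x in channel]

def hyper_scale_shift(modality_embedding, w_gamma, w_beta):
    """Tiny linear 'hypernetwork': maps a modality embedding to per-channel
    (gamma, beta). Weights here are placeholders, not learned values."""
    gamma = [sum(w * e for w, e in zip(row, modality_embedding)) + 1.0
             for row in w_gamma]  # scale initialized around 1
    beta = [sum(w * e for w, e in zip(row, modality_embedding))
            for row in w_beta]    # shift initialized around 0
    return gamma, beta

def conditional_instance_norm(feature_maps, modality_embedding, w_gamma, w_beta):
    """Apply modality-conditioned scaling/shifting after instance normalization."""
    gamma, beta = hyper_scale_shift(modality_embedding, w_gamma, w_beta)
    return [[g * x + b for x in instance_norm(ch)]
            for ch, g, b in zip(feature_maps, gamma, beta)]
```

Because the hypernetwork only emits two numbers per channel, the extra cost at inference is negligible, which is consistent with the efficiency claim above.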
Abstract
Hypernetworks are models that generate or modulate the weights of another network. They provide a flexible mechanism for injecting context and task conditioning and have proven broadly useful across diverse applications without significant increases in model size. We leverage hypernetworks to improve multimodal patch matching by introducing a lightweight descriptor-learning architecture that augments a Siamese CNN with (i) hypernetwork modules that compute adaptive, per-channel scaling and shifting and (ii) conditional instance normalization that provides modality-specific adaptation (e.g., visible vs. infrared, VIS-IR) in shallow layers. This combination preserves the efficiency of descriptor-based methods during inference while increasing robustness to appearance shifts. Trained with a triplet loss and hard-negative mining, our approach achieves state-of-the-art results on VIS-NIR and other VIS-IR benchmarks, and matches or surpasses prior methods, despite their higher inference cost, on additional datasets. To spur progress on domain shift, we also release GAP-VIR, a cross-platform (ground/aerial) VIS-IR patch dataset with 500K pairs, enabling rigorous evaluation of cross-domain generalization and adaptation.
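The training objective mentioned above, a triplet loss with hard-negative mining, can be sketched as follows. This is a generic formulation under assumed defaults (L2 distance, margin 1.0, in-batch hardest-negative selection), not the paper's exact recipe.

```python
import math

def l2_distance(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss_hard_negative(anchor, positive, negatives, margin=1.0):
    """Triplet margin loss where the negative is mined as the non-matching
    descriptor closest to the anchor (the 'hardest' negative in the batch)."""
    d_pos = l2_distance(anchor, positive)
    d_neg = min(l2_distance(anchor, n) for n in negatives)  # hard-negative mining
    # Hinge: push the positive at least `margin` closer than the hardest negative.
    return max(0.0, d_pos - d_neg + margin)
```

During training, the anchor and positive would be descriptors of the same scene point seen in the two modalities (e.g., VIS and IR), and the negatives are the remaining descriptors in the batch.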