AI Summary
This work addresses keypoint detection and description under weak supervision, requiring only binary image-pair labels (same/different scene). Methodologically, it eliminates reliance on hand-crafted transformations, pretrained models, or 3D data. Instead, it constructs multiscale hypercolumn representations from intermediate encoder features and jointly optimizes detectors and descriptors end-to-end via reinforcement learning and an auxiliary contrastive loss. Contributions include: (i) the first framework to jointly learn both keypoint localization and discriminative descriptors under purely binary supervision; (ii) enhanced robustness and generalization through unsupervised feature fusion and reinforcement-based decision making. Evaluated on standard benchmarks, the method matches state-of-the-art fully supervised approaches while drastically reducing annotation cost. The code is publicly available.
Abstract
We introduce RIPE, an innovative reinforcement learning-based framework for weakly-supervised training of a keypoint extractor that excels in both detection and description tasks. In contrast to conventional training regimes that depend heavily on artificial transformations, pretrained models, or 3D data, RIPE requires only a binary label indicating whether a pair of images depicts the same scene. This minimal supervision significantly expands the pool of usable training data, enabling a highly generalized and robust keypoint extractor.
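Because the only supervision is the binary same-scene label, the detector can be trained with a score-function (REINFORCE-style) update: keypoints are sampled from a per-location selection policy, and the binary pair label is turned into a scalar reward. The sketch below is a minimal illustration of that idea on a Bernoulli selection policy; the function name, the +1/-1 reward mapping, and the plain gradient-ascent step are illustrative assumptions, not the paper's exact formulation.

```python
def reinforce_update(probs, chosen, reward, lr=0.1):
    """One REINFORCE step on per-keypoint selection probabilities.

    probs  : current selection probability for each candidate keypoint
             (an independent Bernoulli policy per location)
    chosen : 0/1 sample drawn from that policy for this training step
    reward : +1 if the image pair carries a "same scene" label and the
             selected keypoints matched, -1 otherwise -- the binary pair
             label is the only supervision signal (illustrative mapping)
    Returns the updated probabilities after gradient ascent on
    reward * log-probability of the sampled selection.
    """
    new_probs = []
    for p, c in zip(probs, chosen):
        # Score function of a Bernoulli: d/dp log P(c; p) = c/p - (1-c)/(1-p)
        grad = (c / p - (1 - c) / (1 - p)) * reward
        # Clamp to keep the policy a valid, non-degenerate probability
        new_probs.append(min(0.99, max(0.01, p + lr * grad)))
    return new_probs

# With a positive reward, sampled keypoints become more likely,
# unsampled ones less likely; a negative reward reverses both.
updated = reinforce_update([0.5, 0.5, 0.5], [1, 0, 1], reward=+1.0)
```

In the full method the reward would come from descriptor matching between the two images rather than being a hand-set scalar, but the update direction is the same.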
RIPE describes keypoints using the encoder's intermediate layers, integrating information from different scales through a hyper-column approach. Additionally, we propose an auxiliary loss that enhances the discriminative capability of the learned descriptors.
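A hyper-column descriptor is built by projecting a keypoint's image coordinates into each intermediate feature map and concatenating the sampled channel vectors across scales. The sketch below illustrates this with plain nested lists and nearest-neighbour sampling; the function name and the sampling scheme are simplifying assumptions (a real implementation would typically use bilinear interpolation on tensors).

```python
def hypercolumn_descriptor(feature_maps, x, y, img_w, img_h):
    """Concatenate multi-scale encoder features at one keypoint.

    feature_maps : list of feature maps from intermediate encoder layers,
                   each a nested list of shape [H_l][W_l] holding a channel
                   vector per spatial cell (resolutions may differ per layer)
    (x, y)       : keypoint location in image coordinates
    Returns a single flat descriptor: the channel vectors sampled at the
    projected location in every map, concatenated across scales.
    """
    descriptor = []
    for fmap in feature_maps:
        h, w = len(fmap), len(fmap[0])
        # Map image coordinates into this layer's resolution
        # (nearest neighbour for brevity)
        fy = min(h - 1, int(y / img_h * h))
        fx = min(w - 1, int(x / img_w * w))
        descriptor.extend(fmap[fy][fx])
    return descriptor

# Two illustrative layers: a 4x4 map with 2 channels, a 2x2 map with 3.
fine   = [[[1.0, 2.0] for _ in range(4)] for _ in range(4)]
coarse = [[[3.0, 4.0, 5.0] for _ in range(2)] for _ in range(2)]
desc = hypercolumn_descriptor([fine, coarse], x=10, y=5, img_w=16, img_h=16)
# Descriptor length = sum of channel counts across scales (2 + 3 = 5)
```

The concatenated vector mixes fine, localized detail from early layers with coarser semantic context from deeper layers, which is what makes the hyper-column descriptor robust across scales.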
Comprehensive evaluations on standard benchmarks demonstrate that RIPE simplifies data preparation while achieving competitive performance compared to state-of-the-art techniques, marking a significant advancement in robust keypoint extraction and description. To support further research, we have made our code publicly available at https://github.com/fraunhoferhhi/RIPE.