🤖 AI Summary
To address the degradation of discriminative capability in backward-compatible learning (BCL) caused by overly stringent feature alignment between old and new models, this paper proposes a backfilling-free relaxed-alignment framework. The core innovation is a prototype perturbation mechanism with two adaptive strategies—Neighbor-Driven Prototype Perturbation (NDPP) and Optimization-Driven Prototype Perturbation (ODPP)—that adjust pseudo-old prototypes based on the feature distributions of both the old and new models, thereby mitigating the discriminability loss induced by rigid alignment constraints. The perturbations are computed jointly with end-to-end backward-compatible training of the new model. Extensive experiments on landmark and commodity retrieval benchmarks show that the approach performs favorably against state-of-the-art BCL methods: the new model achieves superior retrieval accuracy while maintaining high backward compatibility, all without backfilling the gallery or modifying the old model.
📝 Abstract
The traditional paradigm for updating retrieval models requires re-computing the embeddings of the gallery data, a time-consuming and computationally intensive process known as backfilling. To circumvent backfilling, Backward-Compatible Learning (BCL) has been widely explored, which aims to train a new model compatible with the old one. Many previous works focus on aligning the embeddings of the new model with those of the old one to enhance backward compatibility. Nevertheless, such strong alignment constraints compromise the discriminative ability of the new model, particularly when different classes are closely clustered and hard to distinguish in the old feature space. To address this issue, we propose to relax the constraints by introducing perturbations to the old feature prototypes. This allows us to align the new feature space with a pseudo-old feature space defined by these perturbed prototypes, thereby preserving the discriminative ability of the new model in backward-compatible learning. We have developed two approaches for calculating the perturbations: Neighbor-Driven Prototype Perturbation (NDPP) and Optimization-Driven Prototype Perturbation (ODPP). In particular, they take into account the feature distributions of not only the old but also the new model to obtain proper perturbations as the new model is updated. Extensive experiments on landmark and commodity datasets demonstrate that our approaches perform favorably against state-of-the-art BCL algorithms.
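The core idea—perturbing old class prototypes so that closely clustered classes become separable, then aligning new-model features to the resulting pseudo-old prototypes—can be sketched as follows. This is an illustrative NumPy sketch under my own assumptions (a simple "push away from the nearest neighboring prototype" rule and a cosine alignment loss), not the paper's actual NDPP/ODPP formulation:

```python
import numpy as np

def perturb_prototypes(old_protos, step=0.5):
    """Neighbor-driven perturbation (illustrative): push each old class
    prototype away from its most similar neighboring prototype, so that
    classes that are hard to distinguish in the old feature space are
    separated in the pseudo-old space."""
    P = old_protos / np.linalg.norm(old_protos, axis=1, keepdims=True)
    sims = P @ P.T
    np.fill_diagonal(sims, -np.inf)          # ignore self-similarity
    nn = sims.argmax(axis=1)                 # nearest (most similar) class
    pushed = P - step * P[nn]                # move away from that neighbor
    return pushed / np.linalg.norm(pushed, axis=1, keepdims=True)

def compatibility_loss(new_feats, labels, pseudo_protos):
    """Relaxed alignment: pull new-model features toward the *perturbed*
    (pseudo-old) prototype of their class, measured as 1 - cosine sim."""
    F = new_feats / np.linalg.norm(new_feats, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(F * pseudo_protos[labels], axis=1)))
```

Aligning to the perturbed prototypes rather than the raw old ones is what relaxes the constraint: the pseudo-old space stays close to the old one (preserving compatibility) while gaining inter-class margin the new model can exploit.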