Towards Adversarially Robust Deep Metric Learning

📅 2025-01-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work exposes the severe vulnerability of deep metric learning (DML) to adversarial attacks in clustering-based inference and demonstrates that existing classification-robust defenses transfer poorly to DML. To address this, we propose Ensemble Adversarial Training (EAT), the first dedicated adversarial training framework for DML. EAT jointly optimizes multi-model ensembling, diversity regularization, and self-distilled feature alignment, while preserving metric-structure constraints (e.g., triplet or N-pair losses) and sharing robustness statistics across the ensemble. Evaluated on CUB200, Cars196, and Stanford Online Products with ResNet and BN-Inception backbones, EAT consistently outperforms adapted classification-robust baselines, achieving an average 12.7% improvement in robust clustering-inference accuracy. This work establishes the first systematic solution for enhancing DML robustness against adversarial perturbations.

📝 Abstract
Deep Metric Learning (DML) has shown remarkable success in many domains by taking advantage of powerful deep neural networks. However, deep neural networks are prone to adversarial attacks and can be easily fooled by adversarial examples. Current progress on this robustness issue mainly concerns deep classification models and pays little attention to DML models. Existing works fail to thoroughly inspect the robustness of DML and neglect an important DML scenario, clustering-based inference. In this work, we first point out the robustness issue of DML models in clustering-based inference scenarios. We find that, for clustering-based inference, existing defenses designed for DML cannot be reused, and adaptations of defenses designed for deep classification models do not achieve satisfactory robustness. To alleviate the hazard of adversarial examples, we propose a new defense, Ensemble Adversarial Training (EAT), which exploits ensemble learning and adversarial training. EAT promotes the diversity of the ensemble, encouraging each model in the ensemble to develop different robustness features, and employs a self-transferring mechanism to make full use of the robustness statistics of the whole ensemble when updating every single model. We evaluate EAT on three widely used datasets with two popular model architectures. The results show that the proposed EAT method greatly outperforms adaptations of defenses designed for deep classification models.
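The EAT recipe the abstract describes (per-model adversarial example generation, a diversity term over the ensemble, and a self-transferring mechanism that shares robustness statistics) can be sketched in miniature. The sketch below is an illustrative assumption, not the paper's exact formulation: it uses toy linear embeddings, an FGSM-style attack on a squared-distance triplet loss, a mean adversarial embedding as a stand-in "shared statistic", and a numerical gradient for the update; the margin, epsilon, and weights are made-up values.

```python
import numpy as np

rng = np.random.default_rng(0)

MARGIN = 0.2   # triplet margin (illustrative value, not from the paper)
EPS    = 0.05  # L-inf bound on the adversarial perturbation (illustrative)
LR     = 0.01  # learning rate (illustrative)


def triplet_loss(W, a, p, n):
    """Squared-distance triplet loss for a linear embedding f(x) = W @ x."""
    d_ap = np.sum((W @ a - W @ p) ** 2)
    d_an = np.sum((W @ a - W @ n) ** 2)
    return max(0.0, d_ap - d_an + MARGIN)


def fgsm_anchor(W, a, p, n):
    """FGSM-style perturbation of the anchor input.

    For a linear embedding, the gradient of the active triplet loss
    with respect to the anchor is 2 * W.T @ W @ (n - p).
    """
    if triplet_loss(W, a, p, n) == 0.0:
        return a  # loss inactive: no gradient, no perturbation
    grad = 2.0 * W.T @ W @ (n - p)
    return a + EPS * np.sign(grad)


def eat_step(ensemble, a, p, n, diversity_weight=0.1):
    """One simplified EAT update over an ensemble of linear embeddings."""
    # Each member attacks the anchor with its own gradient.
    adv_anchors = [fgsm_anchor(W, a, p, n) for W in ensemble]
    # Shared robustness statistic (a stand-in for the paper's mechanism):
    # the ensemble-mean embedding of the adversarial anchors.
    mean_adv_emb = np.mean(
        [W @ x for W, x in zip(ensemble, adv_anchors)], axis=0)

    new_ensemble = []
    for W, a_adv in zip(ensemble, adv_anchors):
        def objective(Wm):
            # Adversarial triplet loss ...
            loss = triplet_loss(Wm, a_adv, p, n)
            # ... pulled toward the shared statistic (self-transferring) ...
            loss += 0.5 * np.sum((Wm @ a_adv - mean_adv_emb) ** 2)
            # ... minus a diversity term pushing members apart.
            for Wo in ensemble:
                if Wo is not W:
                    loss -= diversity_weight * np.sum(
                        (Wm @ a_adv - Wo @ a_adv) ** 2)
            return loss

        # Central-difference numerical gradient (fine at toy scale).
        grad, h = np.zeros_like(W), 1e-5
        for i in range(W.shape[0]):
            for j in range(W.shape[1]):
                Wp_ = W.copy(); Wp_[i, j] += h
                Wm_ = W.copy(); Wm_[i, j] -= h
                grad[i, j] = (objective(Wp_) - objective(Wm_)) / (2 * h)
        new_ensemble.append(W - LR * grad)
    return new_ensemble
```

In a real DML setting each `W` would be a deep embedding network trained by backpropagation over mini-batches of triplets, and the attack would be a multi-step PGD rather than single-step FGSM; the sketch only mirrors the structure of the update.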
Problem

Research questions and friction points this paper is trying to address.

Deep Metric Learning
Adversarial Attacks
Defense Methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ensemble Adversarial Training
Deep Metric Learning
Defense against Adversarial Attacks