On Evaluating the Adversarial Robustness of Foundation Models for Multimodal Entity Linking

📅 2025-08-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work presents the first systematic evaluation of visual adversarial attacks against multimodal entity linking (MEL) models, revealing that state-of-the-art models are highly vulnerable to visual perturbations in both Image-to-Text (I2T) and Image+Text-to-Text (IT2T) linking tasks. To address this, the authors propose LLM-RetLink, a novel framework that couples a large vision model (LVM) with a large language model (LLM) and uses web-retrieval-enhanced, two-stage dynamic entity description generation, substantially improving robustness against visual adversarial examples. The work also introduces the first dedicated visual adversarial dataset for MEL. Extensive experiments on five benchmark datasets show that LLM-RetLink improves accuracy over baselines by 0.4%–35.7%, significantly enhancing model stability and generalization under adversarial conditions.

📝 Abstract
The explosive growth of multimodal data has driven the rapid development of multimodal entity linking (MEL) models. However, existing studies have not systematically investigated the impact of visual adversarial attacks on MEL models. We conduct the first comprehensive evaluation of the robustness of mainstream MEL models under different adversarial attack scenarios, covering two core tasks: Image-to-Text (I2T) and Image+Text-to-Text (IT2T). Experimental results show that current MEL models generally lack sufficient robustness against visual perturbations. Interestingly, contextual semantic information in the input can partially mitigate the impact of adversarial perturbations. Based on this insight, we propose an LLM- and Retrieval-Augmented Entity Linking framework (LLM-RetLink), which significantly improves the model's anti-interference ability through a two-stage process: first, extracting initial entity descriptions using large vision models (LVMs), and then dynamically generating candidate descriptive sentences via web-based retrieval. Experiments on five datasets demonstrate that LLM-RetLink improves the accuracy of MEL by 0.4%–35.7%, showing especially significant advantages under adversarial conditions. This research highlights a previously unexplored facet of MEL robustness, constructs and releases the first MEL adversarial example dataset, and sets the stage for future work aimed at strengthening the resilience of multimodal systems in adversarial environments.
Problem

Research questions and friction points this paper is trying to address.

Evaluating adversarial robustness of multimodal entity linking models
Assessing impact of visual attacks on Image-to-Text linking tasks
Improving model resilience against visual adversarial perturbations
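The paper does not specify which attack algorithms it evaluates, but a standard way to craft the kind of visual perturbations discussed above is the Fast Gradient Sign Method (FGSM). The sketch below is illustrative only: the function name, the choice of FGSM, and the epsilon budget are assumptions, not details from the paper.

```python
import numpy as np

def fgsm_perturb(image: np.ndarray, grad: np.ndarray,
                 epsilon: float = 8 / 255) -> np.ndarray:
    """Craft an adversarial image with FGSM (illustrative sketch).

    image:   input pixels normalized to [0, 1]
    grad:    gradient of the model's loss w.r.t. the image pixels
    epsilon: L-infinity perturbation budget (8/255 is a common choice)
    """
    # Step in the direction that increases the loss, bounded by epsilon.
    adv = image + epsilon * np.sign(grad)
    # Keep the result a valid image.
    return np.clip(adv, 0.0, 1.0)
```

An MEL robustness evaluation in this style would feed `fgsm_perturb(image, grad)` to the linking model in place of `image` and measure the drop in linking accuracy.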
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage LLM and retrieval-augmented framework
Leverages large vision models for initial extraction
Dynamic candidate generation via web retrieval
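The two-stage pipeline above can be sketched as a small orchestration function. This is a structural sketch of the described flow, not the authors' implementation: the callable names (`lvm_describe`, `web_search`, `link`) are hypothetical placeholders for the LVM captioner, the web-retrieval step, and the LLM-based linker.

```python
from typing import Any, Callable, List

def llm_retlink(image: Any, context: str,
                lvm_describe: Callable[[Any], str],
                web_search: Callable[[str], List[str]],
                link: Callable[[str, List[str], str], str]) -> str:
    # Stage 1: a large vision model extracts an initial textual
    # description of the entity depicted in the image.
    description = lvm_describe(image)
    # Stage 2: web retrieval dynamically expands the description into
    # candidate descriptive sentences, and the LLM resolves the final
    # entity using the candidates plus any textual context (IT2T).
    candidates = web_search(description)
    return link(description, candidates, context)
```

Keeping the linking decision in text space is what lets the contextual semantics (which the paper observes to partially offset visual perturbations) compensate for a corrupted image.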