🤖 AI Summary
Multimodal keyphrase generation (MKP) suffers from modality bias and insufficient fine-grained intra-modal feature extraction in multimodal large language models (MLLMs).
Method: We propose a robustness-enhanced framework comprising three components: (i) a progressive modality masking strategy to compel deeper semantic structure exploration within images and text; (ii) a gradient-sensitivity-based noisy sample filtering mechanism to dynamically prune low-quality training instances; and (iii) end-to-end joint optimization to simultaneously strengthen intra-modal understanding and cross-modal alignment.
Contribution/Results: Our method achieves state-of-the-art performance across multiple MKP benchmarks. It demonstrates superior robustness and generalization under challenging conditions—including noisy inputs, missing modalities, and modality misalignment—establishing a new paradigm for robust multimodal semantic generation.
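The progressive modality masking strategy described above can be sketched as a training-time augmentation that hides a growing fraction of one modality's tokens as optimization proceeds. The linear ramp schedule, the `max_ratio` cap, and the `[MASK]` placeholder below are illustrative assumptions, not details taken from the paper:

```python
import random

def progressive_mask(tokens, step, total_steps, max_ratio=0.5, mask_token="[MASK]"):
    """Mask a fraction of tokens that grows linearly with training progress.

    Hypothetical schedule: a linear ramp from 0 at step 0 up to max_ratio at
    the final step. The paper's exact masking curve may differ; this only
    illustrates the idea of progressively harder corrupted inputs.
    """
    ratio = max_ratio * (step / total_steps)
    n_mask = int(len(tokens) * ratio)
    masked_idx = set(random.sample(range(len(tokens)), n_mask))
    return [mask_token if i in masked_idx else t for i, t in enumerate(tokens)]
```

The same schedule could be applied to image patch embeddings by zeroing out patches instead of substituting a mask token; forcing the model to predict keyphrases from these corrupted inputs is what drives the deeper intra-modal feature extraction.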
📝 Abstract
Multimodal keyphrase generation (MKP) aims to extract a concise set of keyphrases that capture the essential meaning of paired image-text inputs, enabling structured understanding, indexing, and retrieval of multimedia data across the web and social platforms. Success in this task demands effectively bridging the semantic gap between heterogeneous modalities. While multimodal large language models (MLLMs) achieve superior cross-modal understanding by leveraging massive pretraining on image-text corpora, we observe that they often struggle with modality bias and fine-grained intra-modal feature extraction. This weakness leads to a lack of robustness in real-world scenarios where multimedia data is noisy, incomplete, or misaligned across modalities. To address this problem, we propose AimKP, a novel framework that explicitly reinforces intra-modal semantic learning in MLLMs while preserving cross-modal alignment. AimKP incorporates two core innovations: (i) Progressive Modality Masking, which forces fine-grained feature extraction from corrupted inputs by progressively masking modality information during training; (ii) Gradient-based Filtering, which identifies and discards noisy samples, preventing them from corrupting the model's core cross-modal learning. Extensive experiments validate AimKP's effectiveness in multimodal keyphrase generation and its robustness across different scenarios.
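The gradient-based filtering idea can be sketched as an outlier cutoff on per-sample gradient norms: samples whose gradients are anomalously large are treated as noisy and dropped from the update. The quantile-threshold criterion below is a hypothetical stand-in, since the abstract does not specify the exact sensitivity statistic:

```python
def filter_by_gradient_norm(grad_norms, keep_quantile=0.9):
    """Return indices of samples to keep, dropping high-gradient-norm outliers.

    Assumes grad_norms[i] is the gradient norm already computed for sample i.
    The cutoff is the keep_quantile-th value of the sorted norms; any sample
    above it is treated as a likely noisy instance and discarded.
    """
    sorted_norms = sorted(grad_norms)
    cutoff_idx = max(0, int(keep_quantile * len(sorted_norms)) - 1)
    cutoff = sorted_norms[cutoff_idx]
    return [i for i, n in enumerate(grad_norms) if n <= cutoff]
```

In a training loop, this filter would run on each mini-batch before the optimizer step, so that only the retained indices contribute to the cross-modal loss; the threshold can be recomputed per batch, which makes the pruning dynamic as the summary describes.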