Augmenting Intra-Modal Understanding in MLLMs for Robust Multimodal Keyphrase Generation

📅 2025-11-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Multimodal keyword generation (MKP) suffers from modality bias and insufficient fine-grained intra-modal feature extraction in multimodal large language models (MLLMs). Method: We propose a robustness-enhanced framework comprising three components: (i) a progressive modality masking strategy to compel deeper semantic structure exploration within images and text; (ii) a gradient-sensitivity-based noisy sample filtering mechanism to dynamically prune low-quality training instances; and (iii) end-to-end joint optimization to simultaneously strengthen intra-modal understanding and cross-modal alignment. Contribution/Results: Our method achieves state-of-the-art performance across multiple MKP benchmarks. It demonstrates superior robustness and generalization under challenging conditions—including noisy inputs, unimodal missingness, and modality misalignment—establishing a novel paradigm for robust multimodal semantic generation.

Technology Category

Application Category

📝 Abstract
Multimodal keyphrase generation (MKP) aims to extract a concise set of keyphrases that capture the essential meaning of paired image-text inputs, enabling structured understanding, indexing, and retrieval of multimedia data across the web and social platforms. Success in this task demands effectively bridging the semantic gap between heterogeneous modalities. While multimodal large language models (MLLMs) achieve superior cross-modal understanding by leveraging massive pretraining on image-text corpora, we observe that they often struggle with modality bias and fine-grained intra-modal feature extraction. This oversight leads to a lack of robustness in real-world scenarios where multimedia data is noisy, along with incomplete or misaligned modalities. To address this problem, we propose AimKP, a novel framework that explicitly reinforces intra-modal semantic learning in MLLMs while preserving cross-modal alignment. AimKP incorporates two core innovations: (i) Progressive Modality Masking, which forces fine-grained feature extraction from corrupted inputs by progressively masking modality information during training; (ii) Gradient-based Filtering, that identifies and discards noisy samples, preventing them from corrupting the model's core cross-modal learning. Extensive experiments validate AimKP's effectiveness in multimodal keyphrase generation and its robustness across different scenarios.
Problem

Research questions and friction points this paper is trying to address.

Enhances intra-modal understanding in MLLMs for robust keyphrase generation
Addresses modality bias and fine-grained feature extraction challenges
Improves robustness with noisy, incomplete, or misaligned multimodal data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive Modality Masking for fine-grained feature extraction
Gradient-based Filtering to discard noisy training samples
Reinforcing intra-modal learning while preserving cross-modal alignment
J
Jiajun Cao
Department of Digital Media Technology, Xiamen University
Qinggang Zhang
Qinggang Zhang
The Hong Kong Polytechnic University
Knowledge GraphsLarge Language ModelsRetrieval-Augmented GenerationText-to-SQL
Y
Yunbo Tang
School of Informatics, Xiamen University
Zhishang Xiang
Zhishang Xiang
Xiamen University
nlp
C
Chang Yang
The Hong Kong Polytechnic University
Jinsong Su
Jinsong Su
Xiamen University
Natural Language ProcessingDeep LearningNeural Machine Translation