🤖 AI Summary
Multimedia event extraction faces significant challenges: scarce annotated data, difficult cross-modal semantic alignment, and weak learning of structured representations. This work proposes a relation-aware multi-task progressive learning framework that, for the first time, uses a staged training strategy to integrate heterogeneous supervisory signals from unimodal event extraction and multimodal relation extraction. This lets the model learn shared cross-modal event representations without requiring end-to-end annotations. Combining vision-language models with a unified event schema, the method substantially improves both event mention identification and argument role extraction, with consistent and significant gains across multiple vision-language models on the M2E2 benchmark.
📝 Abstract
Multimedia Event Extraction (MEE) aims to identify events and their arguments from documents that contain both text and images, which requires grounding event semantics across modalities. Progress in MEE is limited by the scarcity of annotated training data: M2E2, the only established benchmark, provides annotations for evaluation only, making direct supervised training impractical. Existing methods mainly rely on cross-modal alignment or inference-time prompting with Vision-Language Models (VLMs); they do not explicitly learn structured event representations and often produce weak argument grounding in multimodal settings. To address these limitations, we propose RMPL, a Relation-aware Multi-task Progressive Learning framework for MEE under low-resource conditions. RMPL incorporates heterogeneous supervision from unimodal event extraction and multimedia relation extraction through stage-wise training: the model is first trained with a unified schema to learn shared event-centric representations across modalities, and is then fine-tuned for event mention identification and argument role extraction on mixed textual and visual data. Experiments on the M2E2 benchmark with multiple VLMs show consistent improvements across modality settings.
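The stage-wise training the abstract describes can be pictured as a schedule of supervision sources run in order. The sketch below is purely illustrative, not the authors' implementation: the stage names, epoch counts, and the `run`/`step_fn` interface are all assumptions standing in for real dataloaders and optimizer steps.

```python
from dataclasses import dataclass


@dataclass
class Stage:
    """One phase of progressive training: which supervision sources it mixes."""
    name: str
    tasks: list   # heterogeneous supervision sources interleaved in this stage
    epochs: int


def progressive_schedule():
    # Stage 1: unified-schema pretraining on heterogeneous supervision;
    # Stage 2: fine-tuning on the two MEE target tasks (names/epochs assumed).
    return [
        Stage("unified-schema pretraining",
              ["text event extraction", "multimedia relation extraction"], 2),
        Stage("task fine-tuning",
              ["event mention identification", "argument role extraction"], 2),
    ]


def run(stages, step_fn):
    """Run stages in order, interleaving the tasks within each stage.

    step_fn(stage_name, task) stands in for one optimization step on a
    batch drawn from that task's data.
    """
    log = []
    for stage in stages:
        for _ in range(stage.epochs):
            for task in stage.tasks:
                step_fn(stage.name, task)
                log.append((stage.name, task))
    return log
```

In this toy schedule, all pretraining steps complete before any fine-tuning step begins, which is the defining property of a staged (rather than jointly mixed) multi-task regime.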