🤖 AI Summary
Multimedia Event Extraction (MEE) faces two key challenges: modeling complex, flexible event structures and a severe scarcity of fine-grained multimodal alignment annotations. To address these, the paper proposes a stepwise schema-guided prompting framework that decouples event type identification and argument role filling into two sequential prompting stages, and introduces text-bridged grounding to achieve fine-grained cross-modal alignment. To ease the data bottleneck, the authors construct a weakly aligned multimodal event dataset from existing unimodal annotations. Building on the LLaVA-v1.5-7B foundation model, they apply LoRA for parameter-efficient instruction tuning, integrating the multi-step prompting with text-bridged grounding. On the M2E2 benchmark, the method outperforms prior state-of-the-art methods by 5.8 F1 points on event detection and 8.4 F1 points on argument extraction.
📝 Abstract
Multimedia Event Extraction (MEE) has become an important task in information extraction research, as news increasingly incorporates multimedia content. Current MEE work faces two main challenges: (1) inadequate extraction-framework modeling for handling complex and flexible multimedia event structures; (2) the absence of multimodal-aligned training data for effective knowledge transfer to the MEE task. In this work, we propose a Stepwise Schema-Guided Prompting Framework (SSGPF) that uses a Multimodal Large Language Model (MLLM) as its backbone to adaptively capture event structure for MEE. In the initial step of SSGPF, we design Event Type Schema Guided Prompting (ETSGP) for event detection; we then devise Argument Role Schema Guided Prompting (ARSGP), which combines multi-step prompts with a text-bridged grounding technique for argument extraction. We construct a weakly aligned multimodal event-labeled dataset based on existing unimodal event annotations, then conduct parameter-efficient instruction tuning with LoRA on LLaVA-v1.5-7B under SSGPF. Experiments on the M2E2 benchmark demonstrate that SSGPF significantly outperforms current SOTA baselines by 5.8 F1 points on event detection and 8.4 F1 points on argument extraction.
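The two-stage flow described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the schemas are an illustrative subset, the prompt wording is invented, and `query_mllm` is a stub standing in for a real call to the instruction-tuned MLLM.

```python
# Hypothetical sketch of SSGPF's two prompting stages (ETSGP then ARSGP).
# Schemas, prompt text, and the query_mllm stub are illustrative assumptions.

EVENT_TYPE_SCHEMA = ["Attack", "Demonstrate", "Transport"]  # illustrative subset
ARGUMENT_ROLE_SCHEMA = {
    "Attack": ["Attacker", "Target", "Instrument", "Place"],
    "Demonstrate": ["Demonstrator", "Place"],
    "Transport": ["Agent", "Artifact", "Origin", "Destination"],
}

def query_mllm(prompt: str, image=None) -> str:
    """Stand-in for a call to the tuned MLLM (e.g. LLaVA-v1.5-7B)."""
    # A real implementation would run the model; here we return a canned answer.
    return "Attack"

def extract_events(text: str, image=None) -> dict:
    # Stage 1: Event Type Schema Guided Prompting (ETSGP) - event detection.
    type_prompt = (
        f"Given the event type schema {EVENT_TYPE_SCHEMA}, "
        f"which event type does the following input describe?\n{text}"
    )
    event_type = query_mllm(type_prompt, image).strip()

    # Stage 2: Argument Role Schema Guided Prompting (ARSGP) - one prompt per
    # role in the detected type's schema. For visual arguments, a text-bridged
    # grounding step would first name the entity in text, then localize that
    # textual answer in the image.
    arguments = {}
    for role in ARGUMENT_ROLE_SCHEMA.get(event_type, []):
        role_prompt = (
            f"For the {event_type} event in the input, identify the {role} "
            f"argument (answer 'none' if absent)."
        )
        answer = query_mllm(role_prompt, image).strip()
        if answer.lower() != "none":
            arguments[role] = answer
    return {"event_type": event_type, "arguments": arguments}
```

Decoupling the stages this way lets the second prompt be conditioned on the detected event type, so the model only fills roles that exist in that type's schema.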