Stepwise Schema-Guided Prompting Framework with Parameter Efficient Instruction Tuning for Multimedia Event Extraction

📅 2025-06-30
🏛️ IEEE International Conference on Multimedia and Expo
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Multimedia Event Extraction (MEE) faces two key challenges: complex structural modeling and severe scarcity of fine-grained multimodal alignment annotations. To address these, we propose a stepwise schema-guided prompting framework that decouples event type identification and argument role filling into two sequential prompting stages, and introduces text-bridged localization to achieve fine-grained cross-modal alignment. To alleviate the data bottleneck, we construct the first weakly aligned multimodal event dataset. Building on the LLaVA-v1.5-7B foundation model, we employ LoRA for parameter-efficient instruction tuning, integrating multi-step prompting with bridged grounding. On the M2E2 benchmark, our method achieves gains of +5.8 F1 points in event detection and +8.4 F1 points in argument extraction over prior state-of-the-art methods, demonstrating substantial improvements in both accuracy and robustness.

📝 Abstract
Multimedia Event Extraction (MEE) has become an important task in information extraction research as news content today increasingly incorporates multimedia. Current MEE works mainly face two challenges: (1) inadequate extraction framework modeling for handling complex and flexible multimedia event structure; (2) the absence of multimodal-aligned training data for effective knowledge transfer to the MEE task. In this work, we propose a Stepwise Schema-Guided Prompting Framework (SSGPF) using a Multimodal Large Language Model (MLLM) as backbone for adaptive structure capturing to solve the MEE task. At the initial step of SSGPF, we design Event Type Schema Guided Prompting (ETSGP) for event detection; we then devise Argument Role Schema Guided Prompting (ARSGP), which combines multi-step prompts with a text-bridged grounding technique for argument extraction. We construct a weakly-aligned multimodal event-labeled dataset based on existing unimodal event annotations, then conduct parameter-efficient instruction tuning with LoRA on LLaVA-v1.5-7B under SSGPF. Experiments on the M2E2 benchmark demonstrate that SSGPF significantly outperforms current SOTA baselines by 5.8 F1 points on event detection and 8.4 F1 points on argument extraction.
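The LoRA tuning mentioned in the abstract freezes the pretrained weights and learns only a low-rank update. A minimal pure-Python sketch of the idea (illustrative only; the paper applies LoRA to LLaVA-v1.5-7B through standard parameter-efficient tuning tooling, not this toy class):

```python
import random

def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

class LoRALinear:
    """y = W x + (alpha / r) * B (A x); W is frozen, only A and B are trained."""
    def __init__(self, W, r=2, alpha=4):
        d_out, d_in = len(W), len(W[0])
        self.W = W                     # frozen pretrained weight
        self.scale = alpha / r
        # A gets a small random init, B starts at zero, so the adapter
        # is a no-op before training (standard LoRA initialization).
        self.A = [[random.gauss(0, 0.01) for _ in range(d_in)] for _ in range(r)]
        self.B = [[0.0] * r for _ in range(d_out)]

    def __call__(self, x):
        base = matvec(self.W, x)
        delta = matvec(self.B, matvec(self.A, x))
        return [b + self.scale * d for b, d in zip(base, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]
layer = LoRALinear(W)
print(layer([3.0, 4.0]))  # B is zero-initialized, so output equals W x: [3.0, 4.0]
```

Because only `A` and `B` (rank `r` factors) receive gradients, the trainable parameter count scales with `r * (d_in + d_out)` instead of `d_in * d_out`, which is what makes instruction tuning a 7B model tractable.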
Problem

Research questions and friction points this paper is trying to address.

Extracts events from multimedia content using structured prompting
Addresses lack of multimodal training data for event extraction
Improves event detection and argument extraction accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stepwise Schema-Guided Prompting Framework for adaptive structure
Parameter efficient instruction tuning with LoRA on LLaVA
Weakly-aligned multimodal dataset from unimodal annotations
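The stepwise decomposition above (detect the event type first, then fill that type's argument roles) can be sketched as a two-stage prompting loop. Everything here is a hypothetical illustration: the schema entries, prompt wording, and `mock_mllm` stand-in are assumptions, not the paper's actual ETSGP/ARSGP prompts or model:

```python
# Toy event-type schema mapping each type to its argument roles (assumed values).
EVENT_TYPE_SCHEMA = {
    "Conflict.Attack": ["Attacker", "Target", "Instrument"],
    "Movement.Transport": ["Agent", "Artifact", "Destination"],
}

def mock_mllm(prompt: str) -> str:
    """Stand-in for the instruction-tuned MLLM; returns canned answers."""
    if "Which event type" in prompt:
        return "Conflict.Attack"
    return "Attacker: soldiers; Target: convoy; Instrument: rifles"

def extract_event(caption: str, model=mock_mllm) -> dict:
    # Stage 1 (event detection): prompt with the list of schema types.
    type_prompt = (
        f"Which event type from {list(EVENT_TYPE_SCHEMA)} does this describe?\n"
        f"Text: {caption}"
    )
    event_type = model(type_prompt).strip()

    # Stage 2 (argument extraction): prompt with that type's role schema,
    # so the model only fills slots valid for the detected event type.
    roles = EVENT_TYPE_SCHEMA[event_type]
    role_prompt = f"For a {event_type} event, fill the roles {roles}.\nText: {caption}"
    answer = model(role_prompt)
    arguments = dict(pair.split(": ", 1) for pair in answer.split("; "))
    return {"event_type": event_type, "arguments": arguments}

result = extract_event("Soldiers fired on the convoy with rifles.")
```

Conditioning the second prompt on the first stage's output is the point of the decomposition: the argument-role schema constrains the model to a fixed, type-specific slot set instead of an open-ended extraction.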
Xiang Yuan
School of Software and Microelectronics, Peking University, Beijing, China
Xinrong Chen
School of Software and Microelectronics, Peking University, Beijing, China
Haochen Li
Tsinghua University
cell-cell communication · single-cell genomics · spatial transcriptomics
Hang Yang
Baidu Inc., Beijing, China
Guanyu Wang
School of Software and Microelectronics, Peking University, Beijing, China
Weiping Li
School of Software and Microelectronics, Peking University, Beijing, China
Tong Mo
AI Research Engineer at Huawei Canada
Reinforcement Learning · Keyword Spotting