Reflective Agreement: Combining Self-Mixture of Agents with a Sequence Tagger for Robust Event Extraction

📅 2025-08-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Event extraction faces a fundamental trade-off between the low recall of discriminative models and the severe hallucination of generative models. To address this, we propose ARIS, a framework that tightly integrates a Self-Mixture of Agents with a discriminative sequence tagger. ARIS mitigates hallucination via model-consensus reasoning, confidence-based filtering, and LLM-driven reflective refinement. It further introduces decomposition-based instruction tuning to explicitly encode event structure, enhancing the LLM's capacity for structured understanding. Evaluated on the ACE2005, E2E, and RAMS benchmarks, ARIS substantially outperforms state-of-the-art methods, improving both precision and recall for trigger identification and argument extraction. The framework delivers robust, high-coverage, end-to-end event extraction without compromising fidelity or structural integrity.

📝 Abstract
Event Extraction (EE) involves automatically identifying and extracting structured information about events from unstructured text, including triggers, event types, and arguments. Traditional discriminative models demonstrate high precision but often exhibit limited recall, particularly for nuanced or infrequent events. Conversely, generative approaches leveraging Large Language Models (LLMs) provide higher semantic flexibility and recall but suffer from hallucinations and inconsistent predictions. To address these challenges, we propose the Agreement-based Reflective Inference System (ARIS), a hybrid approach combining a Self-Mixture of Agents with a discriminative sequence tagger. ARIS explicitly leverages structured model consensus, confidence-based filtering, and an LLM reflective inference module to reliably resolve ambiguities and enhance overall event prediction quality. We further investigate decomposed instruction fine-tuning to improve the LLM's understanding of event extraction. Experiments demonstrate that our approach outperforms existing state-of-the-art event extraction methods across three benchmark datasets.
Problem

Research questions and friction points this paper is trying to address.

Improving recall for nuanced events in extraction
Reducing hallucinations in generative event extraction
Resolving ambiguities to enhance prediction quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid Self-Mixture of Agents with sequence tagger
Structured model consensus and confidence filtering
LLM reflective inference module for ambiguity resolution
Fatemeh Haji
Secure AI and Autonomy Lab, University of Texas at San Antonio
Mazal Bethany
University of Texas at San Antonio (UTSA)
Artificial Intelligence · Large Language Models · AI Security
Cho-Yu Jason Chiang
Peraton Labs
Anthony Rios
Associate Professor in Information Systems and Cyber Security
Natural Language Processing · Biomedical Informatics · Computational Social Science · Social Computing
Peyman Najafirad
Secure AI and Autonomy Lab, University of Texas at San Antonio