SEOE: A Scalable and Reliable Semantic Evaluation Framework for Open Domain Event Detection

📅 2025-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current open-domain event detection (ODED) evaluation faces two key bottlenecks: (1) narrow benchmark coverage lacking real-world representativeness, and (2) token-level matching metrics failing to capture semantic equivalence. To address these, we introduce a scalable, multi-domain evaluation benchmark covering seven domains and 564 event types. We further propose a large language model (LLM)-based semantic F1 metric that transcends lexical matching by quantifying fine-grained semantic similarity. Our method incorporates a novel definition of granular semantic equivalence, a low-cost incremental annotation strategy, and an LLM-driven evaluation agent framework. Experiments demonstrate strong domain representativeness of the benchmark and high agreement between semantic F1 and human judgments (Spearman’s ρ = 0.92). The proposed approach significantly improves reliability and interpretability in evaluating state-of-the-art ODED systems, establishing a reproducible, extensible evaluation paradigm for open-domain event detection.

📝 Abstract
Automatic evaluation for Open Domain Event Detection (ODED) is a highly challenging task, because ODED is characterized by a vast diversity of unconstrained output labels from various domains. Nearly all existing evaluation methods for ODED first construct evaluation benchmarks with limited labels and domain coverage, and then evaluate ODED methods using metrics based on token-level label matching rules. However, this kind of evaluation framework faces two issues: (1) the limited evaluation benchmarks lack representativeness of the real world, making it difficult to accurately reflect the performance of various ODED methods in real-world scenarios; (2) evaluation metrics based on token-level matching rules fail to capture semantic similarity between predictions and gold labels. To address these two problems, we propose a scalable and reliable Semantic-level Evaluation framework for Open domain Event detection (SEOE) by constructing a more representative evaluation benchmark and introducing a semantic evaluation metric. Specifically, our proposed framework first constructs a scalable evaluation benchmark that currently includes 564 event types covering 7 major domains, with a cost-effective supplementary annotation strategy to ensure the benchmark's representativeness. The strategy also allows new event types and domains to be added in the future. Then, SEOE leverages large language models (LLMs) as automatic evaluation agents to compute a semantic F1-score, incorporating fine-grained definitions of semantically similar labels to enhance the reliability of the evaluation. Extensive experiments validate the representativeness of the benchmark and the reliability of the semantic evaluation metric. Existing ODED methods are thoroughly evaluated, and the error patterns of their predictions are analyzed, revealing several insightful findings.
Problem

Research questions and friction points this paper is trying to address.

Limited evaluation benchmarks lack real-world representativeness.
Token-level metrics fail to capture semantic similarity.
Proposes SEOE framework for scalable and semantic evaluation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scalable evaluation benchmark with 564 event types
Semantic F1-score using large language models
Cost-effective annotation for future domain expansion
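The semantic F1 idea listed above can be sketched in a few lines: instead of token-level label matching, a semantic-equivalence judge decides whether a predicted event type matches a gold type, and precision/recall/F1 are computed over the judged matches. In the sketch below, `is_equivalent` stands in for the paper's LLM evaluation agent (the actual prompts and fine-grained equivalence definitions are not reproduced here), and `toy_judge` with its synonym table is purely illustrative:

```python
def semantic_f1(predicted, gold, is_equivalent):
    """Semantic F1-score: greedily match predicted event types to gold
    event types using a semantic-equivalence judge instead of exact
    token matching. Each gold label can be matched at most once."""
    matched_gold = set()
    tp = 0
    for p in predicted:
        for i, g in enumerate(gold):
            if i not in matched_gold and is_equivalent(p, g):
                matched_gold.add(i)
                tp += 1
                break
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy judge (hypothetical): labels are equivalent if they share a
# normalized canonical form; a real system would query an LLM here.
SYNONYMS = {"acquisition": "merger-acquisition", "m&a": "merger-acquisition"}

def toy_judge(a, b):
    norm = lambda x: SYNONYMS.get(x.lower(), x.lower())
    return norm(a) == norm(b)

# "Acquisition" matches "M&A" semantically despite zero token overlap.
print(semantic_f1(["Acquisition", "Protest"], ["M&A", "Election"], toy_judge))  # → 0.5
```

A token-level metric would score the "Acquisition"/"M&A" pair as a miss; the judged matching is what lets the metric reward semantically correct predictions.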
Yi-Fan Lu
Beijing Institute of Technology
Event Extraction
Xian-Ling Mao
Beijing Institute of Technology
Web Data Mining, Information Extraction, QA & Dialogue, Topic Modeling, Learning to Hash
Tian Lan
Beijing Institute of Technology
Tong Zhang
Beijing Institute of Technology
Yu-Shi Zhu
Beijing Institute of Technology
Heyan Huang
Beijing Institute of Technology