🤖 AI Summary
Current open-domain event detection (ODED) evaluation faces two key bottlenecks: (1) narrow benchmark coverage lacking real-world representativeness, and (2) token-level matching metrics failing to capture semantic equivalence. To address these, we introduce a scalable, multi-domain evaluation benchmark covering seven domains and 564 event types. We further propose a large language model (LLM)-based semantic F1 metric that transcends lexical matching by quantifying fine-grained semantic similarity. Our method incorporates a novel definition of granular semantic equivalence, a low-cost incremental annotation strategy, and an LLM-driven evaluation agent framework. Experiments demonstrate strong domain representativeness of the benchmark and high agreement between semantic F1 and human judgments (Spearman’s ρ = 0.92). The proposed approach significantly improves reliability and interpretability in evaluating state-of-the-art ODED systems, establishing a reproducible, extensible evaluation paradigm for open-domain event detection.
📝 Abstract
Automatic evaluation for Open Domain Event Detection (ODED) is a highly challenging task, because ODED is characterized by a vast diversity of unconstrained output labels from various domains. Nearly all existing evaluation methods for ODED first construct evaluation benchmarks with limited labels and domain coverage, and then evaluate ODED methods using metrics based on token-level label matching rules. However, this kind of evaluation framework faces two issues: (1) the limited evaluation benchmarks are not representative of the real world, making it difficult to accurately reflect the performance of various ODED methods in real-world scenarios; (2) evaluation metrics based on token-level matching rules fail to capture the semantic similarity between predictions and gold labels. To address these two problems, we propose a scalable and reliable Semantic-level Evaluation framework for Open domain Event detection (SEOE), which constructs a more representative evaluation benchmark and introduces a semantic evaluation metric. Specifically, our framework first constructs a scalable evaluation benchmark that currently includes 564 event types covering 7 major domains, using a cost-effective supplementary annotation strategy to ensure the benchmark's representativeness; the strategy also allows new event types and domains to be added in the future. Then, SEOE leverages large language models (LLMs) as automatic evaluation agents to compute a semantic F1-score, incorporating fine-grained definitions of semantically similar labels to enhance the reliability of the evaluation. Extensive experiments validate the representativeness of the benchmark and the reliability of the semantic evaluation metric. Existing ODED methods are thoroughly evaluated, and the error patterns of their predictions are analyzed, revealing several insightful findings.
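The core idea behind the semantic F1-score is to replace exact token matching with LLM-judged label equivalence. The abstract does not spell out the full scoring procedure, but a minimal sketch of a metric with this shape might look like the following; here `is_equivalent` and `toy_judge` are hypothetical stand-ins for the LLM evaluation agent and its fine-grained semantic-equivalence definition, not the paper's actual implementation.

```python
from typing import Callable


def semantic_f1(
    predicted: list[str],
    gold: list[str],
    is_equivalent: Callable[[str, str], bool],
) -> tuple[float, float, float]:
    """Semantic precision/recall/F1 over event-type labels.

    `is_equivalent` plays the role of the LLM evaluation agent:
    it returns True when two labels are judged semantically
    similar under a fine-grained equivalence definition.
    """
    # A predicted label counts as correct if it matches any gold label.
    matched_preds = sum(
        any(is_equivalent(p, g) for g in gold) for p in predicted
    )
    # A gold label counts as recalled if any prediction matches it.
    matched_golds = sum(
        any(is_equivalent(p, g) for p in predicted) for g in gold
    )
    precision = matched_preds / len(predicted) if predicted else 0.0
    recall = matched_golds / len(gold) if gold else 0.0
    f1 = (
        2 * precision * recall / (precision + recall)
        if precision + recall
        else 0.0
    )
    return precision, recall, f1


# Toy judge for illustration only: normalized exact match. A real
# evaluation agent would instead prompt an LLM with both labels and
# the framework's equivalence definition.
def toy_judge(a: str, b: str) -> bool:
    return a.strip().lower() == b.strip().lower()


print(semantic_f1(["Product Launch", "Merger"], ["product launch"], toy_judge))
# -> (0.5, 1.0, 0.666...)
```

Under this formulation, swapping `toy_judge` for an LLM-backed judge is what moves the metric from token-level to semantic-level matching: a prediction like "corporate acquisition" can then score against a gold label "merger and acquisition" even though the strings differ.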