Keyword-Centric Prompting for One-Shot Event Detection with Self-Generated Rationale Enhancements

📅 2025-08-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) struggle with accurate trigger identification and tend to over-interpret in few-shot event detection. To address this, the paper proposes KeyCP++, a keyword-centric chain-of-thought prompting approach that anchors on exemplary event triggers (keywords) to construct a discriminative prompt template. KeyCP++ guides the model to autonomously propose candidate triggers and justify each one through explicit reasoning steps, modeling event detection rules while automatically annotating the logical gaps between input text and detection results. Unlike conventional in-context learning, KeyCP++ turns trigger identification into an interpretable, multi-step propose-and-judge process, mitigating over-reliance on lexical keyword cues. Under the one-shot setting, KeyCP++ achieves state-of-the-art F1 scores across multiple benchmark datasets, establishing a more interpretable, robust, and generalizable prompting framework for few-shot event detection.

📝 Abstract
Although the LLM-based in-context learning (ICL) paradigm has demonstrated considerable success across various natural language processing tasks, it encounters challenges in event detection. This is because LLMs lack an accurate understanding of event triggers and tend to over-interpret, which cannot be effectively corrected through in-context examples alone. In this paper, we focus on the most challenging one-shot setting and propose KeyCP++, a keyword-centric chain-of-thought prompting approach. KeyCP++ addresses the weaknesses of conventional ICL by automatically annotating the logical gaps between input text and detection results in the demonstrations. Specifically, to generate in-depth and meaningful rationales, KeyCP++ constructs a trigger-discrimination prompting template. It incorporates the exemplary triggers (a.k.a. keywords) into the prompt as anchors to simplify trigger profiling, lets the LLM propose candidate triggers, and has it justify each candidate. These propose-and-judge rationales help LLMs mitigate over-reliance on the keywords and promote detection-rule learning. Extensive experiments demonstrate the effectiveness of our approach, showcasing significant advancements in one-shot event detection.
Problem

Research questions and friction points this paper is trying to address.

LLMs struggle with accurate event trigger understanding
Over-interpretation in event detection cannot be corrected by in-context examples alone
One-shot event detection needs improved rationale generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Keyword-centric chain-of-thought prompting approach
Trigger discrimination prompting template
Propose-and-judge rationales mitigate over-reliance
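The propose-and-judge idea can be sketched as a prompt-construction step. The snippet below is an illustrative sketch only, not the authors' actual KeyCP++ template: the wording of the instructions, the function name `build_propose_judge_prompt`, and the example texts are all assumptions introduced for clarity.

```python
# Illustrative sketch of a keyword-anchored propose-and-judge prompt
# for one-shot event detection. Not the paper's exact template.

def build_propose_judge_prompt(demo_text, demo_keywords, query_text):
    """Build a trigger-discrimination prompt that anchors on the
    demonstration's exemplary triggers (keywords), then asks the model
    to propose candidate triggers in the query and justify each one."""
    keyword_list = ", ".join(demo_keywords)
    return (
        "Task: event detection. Identify event trigger words in the text.\n\n"
        f"Demonstration text: {demo_text}\n"
        f"Exemplary triggers (keywords): {keyword_list}\n\n"
        "Step 1 (propose): list candidate trigger words in the query text,\n"
        "including words beyond the exemplary keywords.\n"
        "Step 2 (judge): for each candidate, explain whether it truly\n"
        "evokes an event, then keep or discard it.\n\n"
        f"Query text: {query_text}\n"
        "Answer:"
    )

prompt = build_propose_judge_prompt(
    demo_text="The company fired its CEO on Monday.",
    demo_keywords=["fired"],
    query_text="Protesters attacked the embassy after the election.",
)
print(prompt)
```

The two-step structure mirrors the abstract's description: the keywords serve only as anchors for proposing candidates, while the judging step forces an explicit rationale for each, discouraging blind keyword matching.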