🤖 AI Summary
This study investigates the feasibility of employing large language models (LLMs) for frame semantic parsing (FSP) without fine-tuning, specifically targeting frame identification (FI) and frame semantic role labeling (FSRL) in the domain of violent events. Method: We propose a fully automated, FrameNet-based in-context learning (ICL) prompting method that constructs task-specific prompts from frame definitions and annotated examples, applied to six mainstream LLMs. Because it relies solely on the FrameNet database, the approach removes the need for additional labeled training data and the computational resources required for supervised fine-tuning. Contribution/Results: This zero-training, highly transferable FSP paradigm achieves an FI F1-score of 94.3% and an FSRL F1-score of 77.4% on the violent-events subset, competitive with supervised fine-tuning baselines. These results demonstrate the strong generalization capability of ICL for structured semantic parsing and establish a resource-efficient paradigm for FSP.
📝 Abstract
Frame Semantic Parsing (FSP) entails identifying predicates and labeling their arguments according to Frame Semantics. This paper investigates the use of In-Context Learning (ICL) with Large Language Models (LLMs) to perform FSP without model fine-tuning. We propose a method that automatically generates task-specific prompts for the Frame Identification (FI) and Frame Semantic Role Labeling (FSRL) subtasks, relying solely on the FrameNet database. These prompts, constructed from frame definitions and annotated examples, are used to guide six different LLMs. Experiments are conducted on a subset of frames related to violent events. The method achieves competitive results, with F1 scores of 94.3% for FI and 77.4% for FSRL. The findings suggest that ICL offers a practical and effective alternative to traditional fine-tuning for domain-specific FSP tasks.
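To make the prompt-construction idea concrete, here is a minimal, hypothetical sketch of how an ICL prompt for the Frame Identification subtask might be assembled from frame definitions and annotated examples. The frame names, definitions, examples, and the `build_fi_prompt` function are illustrative assumptions; the paper's actual prompt template is not reproduced here.

```python
# Hypothetical sketch: FrameNet-style ICL prompt construction for
# Frame Identification (FI). All frames, definitions, and examples
# below are illustrative, not the paper's actual prompt template.

FRAMES = {
    "Attack": "An Assailant physically attacks a Victim.",
    "Killing": "A Killer or Cause causes the death of a Victim.",
}

EXAMPLES = [
    ("The rebels [attacked] the village at dawn.", "Attack"),
    ("The blast [killed] three soldiers.", "Killing"),
]

def build_fi_prompt(sentence: str, target: str) -> str:
    """Assemble an FI prompt from frame definitions and annotated
    examples, in the spirit of the FrameNet-based ICL method."""
    lines = ["Task: choose the FrameNet frame evoked by the bracketed target."]
    lines.append("Candidate frames:")
    for name, definition in FRAMES.items():
        lines.append(f"- {name}: {definition}")
    lines.append("Examples:")
    for text, frame in EXAMPLES:
        lines.append(f"Sentence: {text}\nFrame: {frame}")
    # Mark the target predicate in the query sentence with brackets.
    lines.append(f"Sentence: {sentence.replace(target, f'[{target}]')}")
    lines.append("Frame:")
    return "\n".join(lines)

prompt = build_fi_prompt("Gunmen attacked the convoy.", "attacked")
```

The resulting string would be sent to an LLM, whose completion after the final `Frame:` line is taken as the predicted frame; an analogous prompt listing frame elements instead of frames could serve the FSRL subtask.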