🤖 AI Summary
Existing retrieval-based in-context learning (ICL) methods for relation extraction over-rely on lexical or sentence-level linguistic similarity and neglect deep semantic structure matching, making them prone to erroneous relation predictions. This work introduces Abstract Meaning Representation (AMR) into ICL for the first time, proposing a structured semantic similarity metric grounded in AMR graphs. It enables precise example retrieval via AMR parsing followed by graph edit distance or AMR graph embedding similarity, and integrates the retrieved examples to strengthen prompt construction. The method is fully unsupervised and achieves significant improvements over baselines across four English relation extraction datasets. Under supervised settings, it attains state-of-the-art performance on three datasets and matches the best result on the fourth, demonstrating the superior logical consistency and generalization of structure-aware retrieval. The core contribution is the first AMR-driven, structure-aware retrieval framework for ICL, advancing in-context learning from shallow surface matching toward deep semantic alignment.
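The retrieval step described above can be sketched in miniature. The snippet below is illustrative only, not the paper's implementation: it represents AMR graphs as sets of `(variable, role, concept)` triples and scores structural similarity with a triple-overlap F1 (a simplified, Smatch-like score without variable alignment); the toy graphs, function names, and scoring choice are all assumptions for demonstration.

```python
# Illustrative sketch of structure-aware in-context example retrieval.
# AMR graphs are approximated as sets of (variable, role, concept) triples;
# similarity is a triple-overlap F1, a simplification of Smatch that skips
# variable alignment. All names and data here are hypothetical.

def amr_similarity(graph_a, graph_b):
    """F1 overlap between two AMR graphs given as lists of triples."""
    a, b = set(graph_a), set(graph_b)
    if not a or not b:
        return 0.0
    overlap = len(a & b)
    precision = overlap / len(a)
    recall = overlap / len(b)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def retrieve_examples(query_graph, train_graphs, k=2):
    """Indices of the k training samples most structurally similar to the query."""
    ranked = sorted(range(len(train_graphs)),
                    key=lambda i: amr_similarity(query_graph, train_graphs[i]),
                    reverse=True)
    return ranked[:k]

# Toy AMR triples; variables are pre-normalized to "x" for comparability.
query = [("x", "instance", "found-01"),
         ("x", "ARG0", "person"),
         ("x", "ARG1", "company")]
train = [
    [("x", "instance", "found-01"), ("x", "ARG0", "person"), ("x", "ARG1", "company")],
    [("x", "instance", "work-01"),  ("x", "ARG0", "person"), ("x", "ARG1", "company")],
    [("x", "instance", "live-01"),  ("x", "ARG0", "person"), ("x", "location", "city")],
]

print(retrieve_examples(query, train, k=2))  # → [0, 1]
```

In a full system, the triple sets would come from an AMR parser and the overlap score would be replaced by graph edit distance or learned AMR graph embeddings, as the summary describes; the retrieved examples are then concatenated into the ICL prompt.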
📝 Abstract
Existing in-context learning (ICL) methods for relation extraction (RE) often prioritize linguistic similarity over structural similarity, which can cause entity relationships to be overlooked. To address this, we propose an AMR-enhanced retrieval-based ICL method for RE. Our model retrieves in-context examples based on the semantic structure similarity between task inputs and training samples. Evaluations on four standard English RE datasets show that our model outperforms baselines in the unsupervised setting across all datasets. In the supervised setting, it achieves state-of-the-art results on three datasets and competitive results on the fourth.