🤖 AI Summary
To address the limited diversity, syntactic redundancy, and semantic inconsistency of training samples generated by large language models (LLMs) for relation extraction (RE), this paper proposes a data augmentation method that jointly optimizes diversity and accuracy. First, it introduces Direct Preference Optimization (DPO) to RE sample generation—marking the first application of DPO in this domain—to explicitly model structural and lexical diversity via preference-based optimization. Second, it designs an instruction-guided in-context learning (ICL) prompting strategy to enforce semantic fidelity and relational correctness during generation. Extensive experiments across multiple standard RE benchmarks demonstrate that the augmented samples significantly improve the performance of lightweight RE models, outperforming LLM zero-shot inference. The proposed framework establishes a novel, efficient, and controllable paradigm for low-resource RE, balancing generative flexibility with task-specific constraints.
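The DPO step described above optimizes a model toward preferred (here, more diverse) generations over dispreferred ones. As a minimal sketch of how that objective works on a single preference pair, here is the standard DPO loss computed from sequence log-probabilities; the variable names are illustrative and this is not the paper's actual training code:

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Standard DPO loss for one preference pair, from sequence log-probs.

    pi_*  : log-prob of the chosen/rejected sample under the policy being tuned
    ref_* : log-prob of the same samples under the frozen reference model
    beta  : temperature controlling deviation from the reference model
    """
    # Implicit reward margin: how much more the policy prefers the
    # diverse (chosen) sample over the redundant (rejected) one,
    # relative to the reference model.
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # Negative log-sigmoid of the margin: minimized when the policy
    # assigns relatively higher probability to the chosen sample.
    return math.log(1.0 + math.exp(-margin))
```

Minimizing this loss pushes the generator to assign higher probability to the structurally diverse sample in each pair, which is how preference optimization can encode diversity as a training signal.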
📝 Abstract
Using Large Language Models (LLMs) to generate training data is a potentially preferable way to improve zero- and few-shot NLP tasks. However, many open problems in this direction remain to be investigated. For the task of Relation Extraction (RE), we find that samples generated by directly prompting LLMs often have high structural similarity to one another: they tend to use a limited range of phrasings to express the relation between a pair of entities. Therefore, in this paper, we study how to effectively improve the diversity of the training samples generated with LLMs for RE while maintaining their correctness. We first prompt LLMs to produce dissimilar samples by giving explicit instructions in In-Context Learning (ICL) prompts. We then propose an approach to fine-tune LLMs for diverse training-sample generation through Direct Preference Optimization (DPO). Our experiments on commonly used RE datasets show that both attempts improve the quality of the generated training data. We also find that, compared with performing RE directly with an LLM, training a non-LLM RE model on the LLM's generated samples can lead to better performance.
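The abstract does not spell out how preference pairs for DPO are constructed from generated RE samples. As a minimal sketch, assuming structural similarity is approximated by token-level Jaccard overlap (the paper's actual criterion may differ), one could mark the candidate least similar to its peers as "chosen" and the most redundant one as "rejected":

```python
def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two sentences."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def build_preference_pair(candidates: list[str]) -> tuple[str, str]:
    """Form one DPO preference pair from a batch of generated samples.

    The candidate with the lowest total similarity to the rest is taken
    as 'chosen' (most diverse); the highest as 'rejected' (most redundant).
    """
    scored = [
        (sum(jaccard(c, other) for other in candidates if other is not c), c)
        for c in candidates
    ]
    scored.sort(key=lambda pair: pair[0])
    return scored[0][1], scored[-1][1]  # (chosen, rejected)
```

For example, among several generations expressing the relation capital_of(Paris, France), a near-duplicate paraphrase would score high on overlap and become the rejected sample, while a structurally distinct rewording would become the chosen one.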