🤖 AI Summary
In multi-turn dialogue, user preference identification suffers from high annotation costs and severe error propagation, a problem the paper terms the "Annotating Disaster." To address this, we propose IterChat, a framework that decomposes multi-turn preference extraction into iterative single-turn processes. We introduce an attributed historical preferences data format that explicitly models both historical preferences and their associated attributes, substantially reducing annotation difficulty and ambiguity. Leveraging GPT-4, we predefine domain-specific preference slots and generate high-quality, diverse dialogue data via random sampling and few-shot prompting; the preference extraction model is then fine-tuned on this data. Experiments demonstrate that the new format achieves higher accuracy than conventional multi-turn formats, improves annotation efficiency (a win rate 28.4% higher than the original multi-turn format), mitigates error propagation, and enhances model generalization and deployment reliability.
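The core idea above — reducing multi-turn extraction to iterated one-turn extraction over an accumulated preference state — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names (`extract_one_turn`, `extract_preferences`) and the keyword-matching rules are assumptions standing in for the fine-tuned LLM extractor.

```python
def extract_one_turn(history_prefs: dict, user_utterance: str) -> dict:
    """Placeholder one-turn extractor. In IterChat this step would be an
    LLM call conditioned on the attributed historical preferences; a
    trivial keyword rule keeps the sketch runnable."""
    updates = {}
    if "vegetarian" in user_utterance.lower():
        updates["diet"] = "vegetarian"
    if "spicy" in user_utterance.lower():
        updates["spice_level"] = "high"
    return updates

def extract_preferences(turns: list) -> dict:
    prefs = {}  # attributed historical preferences, carried across turns
    for utterance in turns:
        # multi-turn extraction decomposed into iterative one-turn steps
        prefs.update(extract_one_turn(prefs, utterance))
    return prefs

dialogue = ["I'm vegetarian now.", "And I like spicy food."]
print(extract_preferences(dialogue))
```

Because each step sees only the current utterance plus an explicit preference state, an annotation (or model) error in one turn no longer silently corrupts the labels of every later turn.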
📝 Abstract
Identifying user preferences in dialogue systems is a pivotal aspect of providing satisfying services. Current research shows that using large language models (LLMs) to fine-tune a task-specific preference extractor yields excellent results in terms of accuracy and generalization. However, the primary challenge stems from the inherent difficulty of obtaining high-quality labeled multi-turn dialogue data. Accurately tracking user preference transitions across turns not only demands that annotators apply intensive domain expertise and maintain contextual consistency (termed **"Annotating Disaster"**), but also complicates model training due to error propagation in sequential dependency learning. Inspired by the observation that multi-turn preference extraction can be decomposed into iterative executions of one-turn extraction processes, we propose a novel dialogue data generation framework named **IterChat**. First, we construct a new data format that separates the dialogue data into attributed historical preferences and one-turn dialogues, which reduces the probability of annotation errors and improves annotation efficiency. Then, to generate a high-quality and diverse dialogue dataset, we adopt GPT-4 to predefine the preference slots of the target preference-extraction task and then randomly sample subsets of the slots and their corresponding schema values to create the dialogue datasets. Experimental results indicate that fine-tuning, or even few-shot prompting alone, with the new dialogue format yields superior performance compared to the original multi-turn dialogues. Additionally, the new data format improves annotator efficiency, with a win rate 28.4% higher than that of the original multi-turn dialogues.
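The data-generation step described above — predefine slots with their schema values, sample a random subset, and prompt for a matching one-turn dialogue — can be sketched as below. This is a hedged illustration under stated assumptions: the slot names, schema values, and prompt wording are invented for the example (in the paper the slots come from GPT-4 and the prompt is few-shot), and `sample_dialogue_spec` / `build_prompt` are hypothetical helpers.

```python
import random

# Illustrative slot schema; in IterChat these slots and values would be
# predefined with GPT-4 for the target preference-extraction task.
SLOT_SCHEMA = {
    "cuisine": ["italian", "thai", "mexican"],
    "price_range": ["cheap", "moderate", "expensive"],
    "area": ["north", "south", "centre"],
}

def sample_dialogue_spec(k=2, seed=None):
    """Randomly pick k slots and one schema value per slot."""
    rng = random.Random(seed)
    slots = rng.sample(sorted(SLOT_SCHEMA), k)
    return {s: rng.choice(SLOT_SCHEMA[s]) for s in slots}

def build_prompt(spec):
    """Turn a sampled spec into a generation prompt; a real pipeline
    would prepend few-shot examples before sending this to an LLM."""
    target = ", ".join(f"{s}={v}" for s, v in spec.items())
    return ("Write one user turn that expresses exactly these "
            f"preferences: {target}")

spec = sample_dialogue_spec(seed=0)
print(build_prompt(spec))
```

Sampling slot subsets rather than full dialogues is what gives the generated dataset its diversity: each spec yields an independent one-turn example with known gold labels.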