Enhancing the Preference Extractor in Multi-turn Dialogues: From Annotating Disasters to Accurate Preference Extraction

📅 2025-08-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
In multi-turn dialogue, user preference identification suffers from high annotation costs and severe error propagation, a problem the authors term the "Annotating Disaster." To address this, we propose IterChat, a framework that decomposes multi-turn preference extraction into iterative single-turn processes. We introduce an attributed historical preferences data format that explicitly models both historical preferences and their associated attributes, substantially reducing annotation difficulty and ambiguity. Leveraging GPT-4, we predefine domain-specific slots and generate high-quality, diverse dialogue data via random sampling and few-shot prompting. The preference extraction model is then fine-tuned on this data. Experiments demonstrate that our format achieves higher accuracy than conventional multi-turn formats, improves annotation efficiency (a win rate 28.4% higher than the original multi-turn format), mitigates error propagation, and enhances model generalization and deployment reliability.
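The decomposition described above can be sketched as a loop that carries a compact preference state between turns. The sketch below is illustrative only: the slot names, attribute values, and the keyword-matching `extract_one_turn` stub are invented stand-ins for the paper's fine-tuned extractor, not its actual implementation.

```python
# Hypothetical sketch of IterChat's idea: multi-turn preference extraction
# as repeated one-turn updates over an attributed preference state.
# Slot/attribute names and the extractor stub are illustrative, not from the paper.

def extract_one_turn(history_prefs: dict, turn: str) -> dict:
    """Stand-in for the fine-tuned extractor: returns slot updates for one turn."""
    updates = {}
    if "window seat" in turn:
        updates["seat"] = {"value": "window", "attribute": "stated"}
    if "aisle" in turn:
        updates["seat"] = {"value": "aisle", "attribute": "revised"}
    return updates

def extract_preferences(turns: list) -> dict:
    prefs = {}
    for turn in turns:
        # Each step sees only the compact preference state plus one new turn,
        # so an annotation or model error in one turn does not have to be
        # untangled from a long multi-turn context.
        prefs.update(extract_one_turn(prefs, turn))
    return prefs

result = extract_preferences([
    "I'd like a window seat please.",
    "Actually, make that an aisle seat.",
])
print(result)  # seat ends up as 'aisle' with attribute 'revised'
```

The key property is that the state passed between iterations is the attributed preference record, not the raw dialogue history, which is what makes each annotation and extraction step a single-turn problem.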

📝 Abstract
Identifying user preferences in dialogue systems is a pivotal aspect of providing satisfying services. Current research shows that using large language models (LLMs) to fine-tune a task-specific preference extractor yields excellent results in terms of accuracy and generalization. However, the primary challenge stems from the inherent difficulty of obtaining high-quality labeled multi-turn dialogue data. Accurately tracking user preference transitions across turns not only demands intensive domain expertise and contextual-consistency maintenance from annotators (termed the "Annotating Disaster") but also complicates model training due to error propagation in sequential dependency learning. Inspired by the observation that multi-turn preference extraction can be decomposed into iterative executions of one-turn extraction processes, we propose a novel dialogue data generation framework named IterChat. First, we construct a new data format that divides the dialogue data into attributed historical preferences and one-turn dialogues. This reduces the probability of annotation errors and improves annotation efficiency. Then, to generate a high-quality and diverse dialogue dataset, we adopt GPT-4 to pre-define the preference slots for the target preference extraction task, and then randomly sample subsets of the slots and their corresponding schema values to create the dialogue datasets. Experimental results indicate that fine-tuning or even few-shot prompting with the new dialogue format yields superior performance compared to the original multi-turn dialogues. Additionally, the new data format improves annotation efficiency, with a win rate 28.4% higher than that of the original multi-turn dialogues.
Problem

Research questions and friction points this paper is trying to address.

Difficulty in obtaining high-quality labeled multi-turn dialogue data
Challenges in tracking user preference transitions across turns
Error propagation in sequential dependency learning complicates model training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes multi-turn extraction into one-turn iterations
Introduces IterChat for efficient dialogue data generation
Uses GPT-4 to pre-define preference slots and randomly sample them for data generation
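The slot-sampling step listed above can be sketched as follows. The slot schema here is invented for illustration (the paper's actual domain slots are not given in this summary), and the generated string is just a placeholder for the few-shot prompt that would be sent to GPT-4.

```python
import random

# Illustrative sketch of the data-generation step: pre-defined domain slots
# with schema values (these example slots are hypothetical, not from the paper),
# from which a random subset is sampled to seed a GPT-4 few-shot prompt.
SLOT_SCHEMA = {
    "cuisine": ["italian", "sichuan", "japanese"],
    "price_range": ["cheap", "moderate", "expensive"],
    "party_size": ["1", "2", "4", "6+"],
    "seating": ["indoor", "outdoor", "bar"],
}

def sample_dialogue_seed(k: int = 2, seed: int = 0) -> dict:
    """Pick k slots and one schema value each to condition dialogue generation."""
    rng = random.Random(seed)
    slots = rng.sample(sorted(SLOT_SCHEMA), k)
    return {slot: rng.choice(SLOT_SCHEMA[slot]) for slot in slots}

seed_prefs = sample_dialogue_seed(k=2, seed=0)
prompt = (
    "Generate a one-turn dialogue in which the user expresses: "
    + ", ".join(f"{s}={v}" for s, v in seed_prefs.items())
)
print(prompt)
```

Sampling different slot subsets and values on each draw is what gives the generated dataset its diversity while keeping every example grounded in the target extractor's schema.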
Cheng Wang
Huawei Technologies Co., Ltd.
Ziru Liu
Huawei Noah’s Ark Lab
Pengcheng Tang
Huawei Technologies Co., Ltd.
Mingyu Zhang
Huawei Technologies Co., Ltd.
Quanyu Dai
Huawei Noah’s Ark Lab
Yue Zhu
IBM Research
Performance Optimization, I/O, Storage, Cloud