Extracting Training Dialogue Data from Large Language Model based Task Bots

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates privacy risks in task-oriented dialogue systems powered by large language models (LLMs), which may memorize and inadvertently leak sensitive information from their training data, such as phone numbers or entire travel itineraries. The study systematically evaluates existing training data extraction attacks and shows how the distinctive characteristics of task-oriented dialogue modeling shape memorization behavior. Building on this analysis, the authors propose a tailored attack framework that combines optimized response sampling with membership inference techniques. Experiments demonstrate the extraction of thousands of dialogue-state labels from training dialogues, with best-case precision exceeding 70%. This is the first quantitative analysis of data memorization and its key influencing factors in LLM-based task-oriented dialogue settings, providing critical insights for assessing and mitigating the associated privacy risks.

📝 Abstract
Large Language Models (LLMs) have been widely adopted to enhance Task-Oriented Dialogue Systems (TODS) by modeling complex language patterns and delivering contextually appropriate responses. However, this integration introduces significant privacy risks, as LLMs, functioning as soft knowledge bases that compress extensive training data into rich knowledge representations, can inadvertently memorize training dialogue data containing not only identifiable information such as phone numbers but also entire dialogue-level events like complete travel schedules. Despite the critical nature of this privacy concern, how LLM memorization is inherited in developing task bots remains unexplored. In this work, we address this gap through a systematic quantitative study that involves evaluating existing training data extraction attacks, analyzing key characteristics of task-oriented dialogue modeling that render existing methods ineffective, and proposing novel attack techniques tailored for LLM-based TODS that enhance both response sampling and membership inference. Experimental results demonstrate the effectiveness of our proposed data extraction attack. Our method can extract thousands of training labels of dialogue states with best-case precision exceeding 70%. Furthermore, we provide an in-depth analysis of training data memorization in LLM-based TODS by identifying and quantifying key influencing factors and discussing targeted mitigation strategies.
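The abstract mentions membership inference as one component of the proposed attack. As a rough illustration of that family of techniques (not the paper's actual method), the sketch below flags a candidate string as likely training data when the model assigns it unusually low negative log-likelihood. The function `model_nll` is a hypothetical stand-in for querying a real language model.

```python
def model_nll(text: str) -> float:
    """Hypothetical stand-in for a language model's average per-token
    negative log-likelihood. A real attack would query the target TODS
    model; here, memorized strings simply get a lower score."""
    memorized = {"restaurant-area=centre", "hotel-stars=4"}  # toy "training data"
    return 0.5 if text in memorized else 3.0

def is_likely_member(candidate: str, threshold: float = 1.0) -> bool:
    """Loss-threshold membership inference: low NLL (high model
    confidence) suggests the candidate was seen during training."""
    return model_nll(candidate) < threshold

# Sampled candidate dialogue-state labels (format is illustrative only)
candidates = ["restaurant-area=centre", "restaurant-area=north", "hotel-stars=4"]
members = [c for c in candidates if is_likely_member(c)]
# `members` keeps only the candidates the toy model scores as memorized
```

Real attacks calibrate the threshold against reference models or population statistics rather than using a fixed cutoff; this sketch only shows the basic decision rule.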
Problem

Research questions and friction points this paper is trying to address.

privacy risk
training data extraction
LLM memorization
task-oriented dialogue systems
data leakage
Innovation

Methods, ideas, or system contributions that make the work stand out.

training data extraction
LLM memorization
task-oriented dialogue systems
privacy attack
membership inference
Shuo Zhang
Xi’an Jiaotong University
Natural Language Processing
Junzhou Zhao
Xi'an Jiaotong University
algorithms · graph data · data stream · learning
Junji Hou
MOE Key Laboratory for Intelligent Networks and Network Security, Xi’an Jiaotong University, P.O. Box 1088, No. 28, Xianning West Road, Xi’an, Shaanxi 710049, China
Pinghui Wang
Xi'an Jiaotong University
Chenxu Wang
University of Science and Technology of China
Jing Tao
MOE Key Laboratory for Intelligent Networks and Network Security, Xi’an Jiaotong University, P.O. Box 1088, No. 28, Xianning West Road, Xi’an, Shaanxi 710049, China