🤖 AI Summary
This work investigates privacy risks in task-oriented dialogue systems powered by large language models (LLMs), which may memorize and inadvertently leak sensitive information—such as phone numbers or travel itineraries—from their training data. The study systematically evaluates existing training data extraction attacks, revealing how the distinctive characteristics of task-oriented dialogue modeling influence memorization behavior. Building on this analysis, the authors propose a tailored attack framework that combines optimized response sampling with membership inference techniques. Experimental results demonstrate the successful extraction of thousands of dialogue-state labels from training dialogues, with best-case precision exceeding 70%. This represents the first quantitative analysis of data memorization mechanisms and their key influencing factors in LLM-based task-oriented dialogue systems, providing critical insights for assessing and mitigating the associated privacy risks.
📝 Abstract
Large Language Models (LLMs) have been widely adopted to enhance Task-Oriented Dialogue Systems (TODS) by modeling complex language patterns and delivering contextually appropriate responses. However, this integration introduces significant privacy risks: LLMs function as soft knowledge bases that compress extensive training data into rich knowledge representations, and can inadvertently memorize training dialogues containing not only identifiable information, such as phone numbers, but also entire dialogue-level events, such as complete travel schedules. Despite the critical nature of this privacy concern, how LLM memorization is inherited when LLMs are used to develop task bots remains unexplored. In this work, we address this gap through a systematic quantitative study: we evaluate existing training data extraction attacks, analyze the key characteristics of task-oriented dialogue modeling that render these methods ineffective, and propose novel attack techniques tailored to LLM-based TODS that enhance both response sampling and membership inference. Experimental results demonstrate the effectiveness of our proposed data extraction attack: it can extract thousands of training labels of dialogue states with best-case precision exceeding 70%. Furthermore, we provide an in-depth analysis of training data memorization in LLM-based TODS, identifying and quantifying key influencing factors and discussing targeted mitigation strategies.
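The abstract's attack pipeline pairs response sampling with membership inference. The paper's exact algorithm is not reproduced here, but the general shape of such attacks can be sketched as follows: sample candidate responses from the model, score each candidate by its perplexity under the model, and flag low-perplexity candidates as likely memorized training data. Everything below is a hypothetical illustration — the `toy_score` function and the example strings stand in for a real model's token log-probabilities and real dialogue data.

```python
import math

def perplexity(token_logprobs):
    """Perplexity from per-token log-probabilities (natural log)."""
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)

def rank_candidates(candidates, score_fn):
    """Sort sampled responses by perplexity; low values suggest memorization."""
    return sorted(candidates, key=lambda c: perplexity(score_fn(c)))

def infer_members(candidates, score_fn, threshold):
    """Flag candidates whose perplexity falls below a calibrated threshold."""
    return [c for c in candidates if perplexity(score_fn(c)) < threshold]

# Hypothetical stand-in for an LLM-based TODS: the "model" assigns high
# per-token probability to a string it has memorized, and lower otherwise.
MEMORIZED = {"the hotel phone number is 555-0123"}

def toy_score(text):
    p = 0.9 if text in MEMORIZED else 0.3
    return [math.log(p)] * len(text.split())

candidates = [
    "the hotel phone number is 555-0123",  # memorized (low perplexity)
    "the weather is nice today",           # not memorized (high perplexity)
]
ranked = rank_candidates(candidates, toy_score)
members = infer_members(candidates, toy_score, threshold=2.0)
```

In a real attack, `toy_score` would be replaced by the target model's own log-probabilities, and the threshold would be calibrated, e.g. against a reference model or a held-out set, which is where the membership-inference component does its work.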