🤖 AI Summary
Large language models (LLMs) trained with federated learning (FL) risk memorizing training data across clients. Existing detection methods focus solely on single-sample memorization, neglecting fine-grained inter-sample memorization, and centralized evaluation techniques do not transfer directly to FL.

Method: We extend fine-grained cross-sample memorization assessment to FL for the first time, proposing a unified analytical framework that quantifies both intra-client and cross-client memorization, and we systematically investigate how decoding strategies, prefix length, training rounds, and FL algorithms affect memorization behavior.

Results: Experiments confirm that FL-trained LLMs do memorize client-specific data, with intra-client memorization significantly stronger than cross-client memorization, and that key training and inference factors exert quantifiable, non-negligible effects on memorization intensity. This work establishes an empirically grounded methodology for privacy risk assessment in FL, enabling principled evaluation of model memorization across heterogeneous clients.
📝 Abstract
Federated learning (FL) enables collaborative training without raw data sharing, but still risks training data memorization. Existing FL memorization detection techniques examine one sample at a time, underestimating the subtler risk of cross-sample memorization. In contrast, recent work on centralized learning (CL) has introduced fine-grained methods to assess memorization across all samples in the training data, but these assume centralized data access and cannot be applied directly to FL. We bridge this gap by proposing a framework that quantifies both intra- and inter-client memorization in FL via fine-grained cross-sample memorization measurement across all clients. Based on this framework, we conduct two studies: (1) measuring subtle memorization across clients and (2) examining key factors that influence memorization, including decoding strategies, prefix length, and FL algorithms. Our findings reveal that FL models do memorize client data, with intra-client memorization notably stronger than inter-client memorization, and that both training and inference factors shape its intensity.
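The measurement idea can be sketched as a prefix-completion probe: prompt the trained model with the prefix of each client's samples and check whether the completion verbatim-reproduces a continuation held by the same client (intra-client) or by another client (cross-client). This is a minimal illustrative sketch, not the paper's implementation; the function names, toy model, and exact-match criterion are assumptions.

```python
# Illustrative sketch of fine-grained cross-sample memorization measurement
# across FL clients. All names (cross_sample_memorization, toy_generate) are
# hypothetical; the paper's actual metric and scoring may differ.

def cross_sample_memorization(generate, client_data, prefix_len):
    """M[i][j]: fraction of client i's prompts whose model completion
    verbatim-reproduces the continuation of some sample held by client j.
    Diagonal entries capture intra-client memorization (including
    single-sample self-memorization); off-diagonal entries capture
    cross-client memorization."""
    clients = list(client_data)
    M = {i: dict.fromkeys(clients, 0.0) for i in clients}
    for i in clients:
        for sample in client_data[i]:
            completion = generate(sample[:prefix_len])
            for j in clients:
                if completion and any(
                    completion.startswith(t[prefix_len:])
                    for t in client_data[j]
                ):
                    M[i][j] += 1 / len(client_data[i])
    return M

# Toy stand-in for an FL-trained model that has memorized two
# training strings verbatim (real usage would call an LLM's decoder
# under a chosen decoding strategy).
memorized = ["alice's secret code 1234", "bob's password hunter2"]

def toy_generate(prefix):
    for s in memorized:
        if s.startswith(prefix):
            return s[len(prefix):]
    return ""

client_data = {
    "A": ["alice's secret code 1234", "alice likes green tea"],
    "B": ["bob's password hunter2"],
}
M = cross_sample_memorization(toy_generate, client_data, prefix_len=6)
# Diagonal (intra-client) entries dominate off-diagonal (cross-client)
# entries, mirroring the paper's qualitative finding.
```

In this toy setup, `M["A"]["A"]` and `M["B"]["B"]` are positive while the off-diagonal entries are zero; sweeping `prefix_len` or swapping the decoding strategy inside `generate` corresponds to the factor analysis described above.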