TraceFL: Interpretability-Driven Debugging in Federated Learning via Neuron Provenance

📅 2023-12-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning, attributing global model predictions to specific clients remains challenging; existing debugging methods lack fine-grained cross-client, cross-modal, and large-model support. To address this, we propose an explainable provenance framework grounded in neuron-wise saliency quantification, dynamic activation tracing, and client-to-global model neuron mapping. This is the first approach enabling high-precision client attribution for multimodal (image/text) data and large language models (e.g., GPT), overcoming the limitations of debugging methods built for centralized, single-model training or tied to domain-specific assumptions. Evaluated on six benchmark datasets—including two real-world medical imaging datasets—and four neural architectures, our method achieves 99% accuracy in identifying responsible clients. The framework significantly enhances the debuggability and trustworthiness of federated learning systems while supporting heterogeneous, scalable, and clinically relevant deployments.
📝 Abstract
In Federated Learning, clients train models on local data and send updates to a central server, which aggregates them into a global model using a fusion algorithm. This collaborative yet privacy-preserving training comes at a cost. FL developers face significant challenges in attributing global model predictions to specific clients. Localizing responsible clients is a crucial step towards (a) excluding clients primarily responsible for incorrect predictions and (b) encouraging clients who contributed high-quality models to continue participating in the future. Existing ML debugging approaches are inherently inapplicable as they are designed for single-model, centralized training. We introduce TraceFL, a fine-grained neuron provenance capturing mechanism that identifies clients responsible for a global model's prediction by tracking the flow of information from individual clients to the global model. Since inference on different inputs activates a different set of neurons of the global model, TraceFL dynamically quantifies the significance of the global model's neurons in a given prediction, identifying the most crucial neurons in the global model. It then maps them to the corresponding neurons in every participating client to determine each client's contribution, ultimately localizing the responsible client. We evaluate TraceFL on six datasets, including two real-world medical imaging datasets, and four neural networks, including advanced models such as GPT. TraceFL achieves 99% accuracy in localizing the responsible client in FL tasks spanning both image and text classification. At a time when state-of-the-art ML debugging approaches are mostly domain-specific (e.g., image classification only), TraceFL is the first technique to enable highly accurate automated reasoning across a wide range of FL applications.
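The attribution idea in the abstract—score the global model's neurons by their importance to one prediction, then map those neurons back to each client—can be sketched in a few lines. The sketch below is a toy NumPy illustration under assumed simplifications (one weight vector per model, saliency as activation magnitude, closeness as weight distance); it is not the authors' implementation.

```python
# Toy sketch of neuron-provenance-style client attribution (NOT TraceFL's
# actual algorithm): rank neurons by saliency for one input, then score each
# client by how closely its neurons match the global model on those neurons.
import numpy as np

def neuron_saliency(activations):
    """Assumed toy saliency: absolute activation magnitude per neuron."""
    return np.abs(activations)

def client_contributions(global_acts, client_weights, global_weights, top_k=3):
    """Score each client on the top-k most salient neurons for this input."""
    sal = neuron_saliency(global_acts)
    top = np.argsort(sal)[-top_k:]          # indices of most influential neurons
    scores = {}
    for cid, w in client_weights.items():
        # Closeness of client neurons to the fused global neurons,
        # weighted by each neuron's saliency for the current prediction.
        diff = np.abs(w[top] - global_weights[top])
        scores[cid] = float(np.sum(sal[top] / (1.0 + diff)))
    return scores

rng = np.random.default_rng(0)
gw = rng.normal(size=8)                      # hypothetical global neuron weights
# c0 is nearly identical to the global model; c1, c2 drift further away.
clients = {f"c{i}": gw + rng.normal(scale=s, size=8)
           for i, s in enumerate([0.01, 0.5, 1.0])}
acts = rng.normal(size=8)                    # activations for one test input
scores = client_contributions(acts, clients, gw)
responsible = max(scores, key=scores.get)    # client most responsible: "c0"
```

The key design point the paper emphasizes is that saliency is input-dependent: a different test input activates different neurons, so the same set of clients can receive different responsibility scores per prediction.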
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Contribution Quantification
Model Debugging
Innovation

Methods, ideas, or system contributions that make the work stand out.

TraceFL
Neuron-level Information Flow
High Precision Identification