Interplay between Federated Learning and Explainable Artificial Intelligence: a Scoping Review

📅 2024-11-07
đŸ›ïž arXiv.org
📈 Citations: 0
✹ Influential: 0
📄 PDF
đŸ€– AI Summary
This study addresses the tension between privacy preservation and model interpretability in the joint application of federated learning (FL) and eXplainable Artificial Intelligence (XAI). Following a PRISMA-ScR-guided scoping review protocol, the authors systematically analyze 37 studies that integrate FL frameworks (e.g., PySyft, FedML) with XAI methods (e.g., LIME, SHAP). The review surfaces a dilution effect of FL aggregation on local interpretability: global explanations become more general while node-specific patterns are washed out, and only one of the reviewed studies evaluates this influence quantitatively. It also finds that explanation-enhanced FL algorithms can improve robustness against defaulting or malicious clients, yet fewer than 20% of the reviewed works follow standardized reporting practices or use established FL libraries. The work maps a critical gap in the quantitative assessment of FL's impact on XAI, clarifying key challenges such as interpretability loss during aggregation, and outlines pathways for co-designing privacy-preserving, interpretable FL systems.
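To make the reported dilution effect concrete, here is a minimal sketch, not taken from the paper and using synthetic data: per-node feature attributions (e.g., mean |SHAP| values) are combined by a FedAvg-style weighted average, and a feature that dominates one node's local explanation nearly vanishes from the global one. Node counts, sizes, and attribution values are all illustrative assumptions.

```python
# Minimal sketch (not from the paper): FedAvg-style averaging of
# per-node feature attributions, illustrating how aggregation can
# dilute node-specific explanation patterns.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_features = 5, 4

# Hypothetical per-node attribution vectors (e.g., mean |SHAP| values).
# Node 0 relies heavily on feature 3; the other nodes do not.
local_attr = rng.uniform(0.0, 0.2, size=(n_nodes, n_features))
local_attr[0, 3] = 0.9

# Weighted average across nodes, analogous to FedAvg aggregation.
node_sizes = np.array([100, 400, 400, 400, 400])
weights = node_sizes / node_sizes.sum()
global_attr = weights @ local_attr

print("node 0 attribution:", np.round(local_attr[0], 2))
print("global attribution:", np.round(global_attr, 2))
# Feature 3's importance at node 0 is largely washed out in the
# global explanation: a toy version of the reported dilution effect.
```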

📝 Abstract
The joint implementation of federated learning (FL) and explainable artificial intelligence (XAI) could allow training models from distributed data and explaining their inner workings while preserving essential aspects of privacy. Toward establishing the benefits and tensions associated with their interplay, this scoping review maps the publications that jointly deal with FL and XAI, focusing on publications that reported an interplay between FL and model interpretability or post-hoc explanations. Out of the 37 studies meeting our criteria, only one explicitly and quantitatively analyzed the influence of FL on model explanations, revealing a significant research gap. The aggregation of interpretability metrics across FL nodes created generalized global insights at the expense of node-specific patterns being diluted. Several studies proposed FL algorithms incorporating explanation methods to safeguard the learning process against defaulting or malicious nodes. Studies using established FL libraries or following reporting guidelines are a minority. More quantitative research and structured, transparent practices are needed to fully understand their mutual impact and the conditions under which it arises.
Problem

Research questions and friction points this paper is trying to address.

Analyzing the interplay between federated learning and explainable AI
Exploring FL's impact on model interpretability and post-hoc explanations
Addressing the lack of quantitative analysis of the FL-XAI interplay
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines federated learning with explainable AI methods
Aggregates interpretability metrics across distributed nodes
Incorporates explanation methods to safeguard training against defaulting or malicious nodes (see the sketch below)
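As a rough illustration of the "explanations as a safeguard" idea, below is a minimal sketch, not any specific algorithm from the reviewed studies: each client is assumed to ship a feature-attribution vector alongside its model update, and clients whose attributions diverge from the cohort median are excluded before aggregation. The cosine threshold and the median reference are illustrative assumptions.

```python
# Minimal sketch (hypothetical, not from the reviewed studies):
# screen clients by comparing their explanation vectors against
# the element-wise median before aggregating their updates.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def screen_clients(attributions, threshold=0.3):
    """Keep clients whose attribution vector is cosine-similar
    (>= threshold, an arbitrary choice) to the cohort median."""
    reference = np.median(attributions, axis=0)
    return [i for i, a in enumerate(attributions)
            if cosine(a, reference) >= threshold]

rng = np.random.default_rng(1)
attrs = rng.uniform(0.1, 1.0, size=(6, 4))  # honest clients' attributions
attrs[5] = -attrs[5]                        # sign-flipped outlier: a toy malicious client
print(screen_clients(attrs))                # expect client 5 to be excluded
```

The median reference gives this toy screen some robustness to a single outlier, but it would degrade with many colluding clients; real explanation-based defenses in the literature are considerably more involved.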
đŸ‘„ Authors

L. M. Lopez-Ramos
Holistic Systems department, Simula Metropolitan Center for Digital Engineering, Oslo, Norway
Florian Leiser
Department of Economics and Management, Karlsruhe Institute of Technology, Germany
Aditya Rastogi
Machine Learning Engineer
Reinforcement Learning · Self-Supervised Learning · Artificial General Intelligence
Steven Hicks
Senior Research Scientist at SimulaMet
Machine Learning · Deep Learning · Explainable AI
Inga StrĂŒmke
Norwegian University of Science and Technology
Explainable AI (XAI) · Machine Learning · Beyond Standard Model physics · Supersymmetry
V. Madai
QUEST Center for Responsible Research, Charité - UniversitÀtsmedizin Berlin, Berlin, Germany; School of Computing and Digital Technology, Birmingham City University, Birmingham, United Kingdom
Tobias Budig
ETH Zurich, Switzerland
A. Sunyaev
School of Computation, Information and Technology, Technical University of Munich, Germany
A. Hilbert
Charité Lab for AI in Medicine (CLAIM), Charité - UniversitÀtsmedizin Berlin, Germany