Guarding the Middle: Protecting Intermediate Representations in Federated Split Learning

📅 2026-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of intermediate representations in federated split learning to data reconstruction attacks, which poses a serious privacy risk. To mitigate this, the authors propose KD-UFSL, a novel approach that, for the first time, integrates k-anonymous microaggregation with differential privacy within a U-shaped federated split learning framework to protect these intermediate features. Evaluated on four benchmark datasets, KD-UFSL significantly enhances privacy preservation: it increases the mean squared error of reconstructed images by up to 50% and reduces their structural similarity by up to 40%, while maintaining competitive global model utility. The method thus achieves a strong balance between privacy protection and model performance.

📝 Abstract
Big data scenarios, where massive, heterogeneous datasets are distributed across clients, demand scalable, privacy-preserving learning methods. Federated learning (FL) enables decentralized training of machine learning (ML) models across clients without data centralization. Decentralized training, however, introduces a computational burden on client devices. U-shaped federated split learning (UFSL) offloads a fraction of the client computation to the server while keeping both data and labels on the client side. However, the intermediate representations (i.e., smashed data) shared by clients with the server are prone to exposing clients' private data. To reduce exposure of client data through intermediate representations, this work proposes k-anonymous differentially private UFSL (KD-UFSL), which leverages privacy-enhancing techniques such as microaggregation and differential privacy to minimize data leakage from the smashed data transferred to the server. We first demonstrate that an adversary can recover private client data from intermediate representations via a data-reconstruction attack, and then present a privacy-enhancing solution, KD-UFSL, to mitigate this risk. Our experiments indicate that, alongside increasing the mean squared error between the actual and reconstructed images by up to 50% in some cases, KD-UFSL also decreases the structural similarity between them by up to 40% on four benchmark datasets. More importantly, KD-UFSL improves privacy while preserving the utility of the global model. This highlights its suitability for large-scale big data applications where privacy and utility must be balanced.
Problem

Research questions and friction points this paper is trying to address.

Federated Split Learning
Intermediate Representations
Privacy Leakage
Data Reconstruction Attack
Client Privacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Split Learning
Intermediate Representation Privacy
Differential Privacy
Microaggregation
Data Reconstruction Attack
Obaidullah Zaland
Department of Computing Science, Umeå University, Umeå, SE-90187, Sweden
Sajib Mistry
Curtin University, Bentley WA 6102, Australia
Monowar Bhuyan
Associate Professor & WASP Fellow, Umeå University, Sweden.
Machine learning, Anomaly detection, Systems and AI security, Distributed systems