Towards Privacy-Preserving Split Learning: Destabilizing Adversarial Inference and Reconstruction Attacks in the Cloud

📅 2025-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address dual privacy threats—forward attribute inference and backward feature reconstruction—in split learning for edge-cloud collaborative inference, this paper proposes a plug-and-play privacy-enhancing mechanism. The method jointly leverages class activation map (CAM)-guided feature selection and a lightweight autoencoder-driven perturbative reconstruction, enabling stronger privacy protection at earlier model partition points. Compared to baselines such as PCA, our approach significantly degrades attacker performance: average PSNR of reconstructed features drops by 32.7%, and attribute inference accuracy decreases by 41.5%; meanwhile, edge-side computational overhead is reduced by 38.2% in FLOPs. The core contribution lies in the first integration of CAM and autoencoders for privacy-utility trade-off optimization in split learning—enabling flexible deployment without reliance on trusted third parties.
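The summary describes class activation map (CAM)-guided feature selection: channels at the split point are scored by their importance to the task prediction, and low-importance channels (which may leak private attributes without helping the task) are suppressed before transmission. A minimal numpy sketch of that idea follows; the function name, the channel-mean scoring rule, and the `keep_ratio` parameter are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cam_guided_mask(feature_map, class_weights, keep_ratio=0.5):
    """Hypothetical sketch of CAM-guided channel selection.

    feature_map:   (C, H, W) activations at the split point.
    class_weights: (C,) classifier weights for the predicted class,
                   used to score channel importance as in CAM.
    Channels least important for the task are zeroed, suppressing
    side information before features leave the edge device.
    """
    # CAM-style importance: class weight times mean channel activation
    channel_means = feature_map.reshape(feature_map.shape[0], -1).mean(axis=1)
    importance = class_weights * channel_means
    # keep only the top-k task-relevant channels (k from keep_ratio)
    k = max(1, int(keep_ratio * feature_map.shape[0]))
    keep = np.argsort(importance)[-k:]
    mask = np.zeros(feature_map.shape[0], dtype=bool)
    mask[keep] = True
    return feature_map * mask[:, None, None]
```

In the paper's setting, the surviving channels would then pass through the lightweight autoencoder before being sent to the cloud.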

📝 Abstract
This work aims to provide both privacy and utility within a split learning framework while considering both forward attribute inference and backward reconstruction attacks. To address this, a novel plug-in approach is proposed that uses class activation maps and autoencoders to increase the user's privacy and destabilize an adversary. The proposed approach is compared against a dimensionality-reduction-based plug-in strategy that uses principal component analysis to transform the feature map into a lower-dimensional feature space. Our results show that the proposed autoencoder-based approach is preferable, as it provides protection at an earlier split position across the tested architectures in our setting, and hence better utility for resource-constrained devices in edge-cloud collaborative inference (EC) systems.
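The PCA baseline mentioned in the abstract projects the split-point feature map onto a lower-dimensional subspace before transmission. A minimal numpy sketch of such a projection is below; the function name and batch-SVD formulation are assumptions for illustration, not the baseline's exact implementation.

```python
import numpy as np

def pca_plugin(features, n_components):
    """Sketch of a PCA-based plug-in: project flattened feature
    vectors onto their top principal components, reducing the
    dimensionality of what is sent to the cloud.

    features: (batch, d) flattened feature maps.
    Returns:  (batch, n_components) low-dimensional codes.
    """
    # center the batch, then take principal directions via SVD
    mu = features.mean(axis=0)
    X = features - mu
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    W = Vt[:n_components]          # rows are principal directions
    return X @ W.T
```

The abstract's comparison suggests that this linear projection protects less at early split positions than the learned, nonlinear autoencoder transform.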
Problem

Research questions and friction points this paper is trying to address.

Enhance privacy and utility in split learning frameworks.
Mitigate forward attribute inference and backward reconstruction attacks.
Protect resource-constrained devices in edge-cloud collaborative systems.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses class activation maps to guide privacy-preserving feature selection.
Employs a lightweight autoencoder to destabilize inference and reconstruction attacks.
Compares against a PCA-based dimensionality-reduction plug-in baseline.
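The autoencoder's role above is to pass the selected features through a learned bottleneck, so an attacker inverting the transmitted representation recovers a perturbed reconstruction rather than the raw activations. The forward pass can be sketched as follows; the random weights here are stand-ins (the paper trains them to preserve task utility), and the function name and bottleneck size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def autoencoder_perturb(z, dim_hidden=16):
    """Sketch of a lightweight autoencoder pass at the split point.

    z: (batch, d) feature vectors. The ReLU bottleneck of width
    dim_hidden discards fine detail, yielding a perturbed
    reconstruction that is harder to invert. Weights are random
    stand-ins for the trained encoder/decoder.
    """
    d = z.shape[-1]
    W_enc = rng.normal(scale=d ** -0.5, size=(d, dim_hidden))
    W_dec = rng.normal(scale=dim_hidden ** -0.5, size=(dim_hidden, d))
    h = np.maximum(z @ W_enc, 0.0)   # encode through ReLU bottleneck
    return h @ W_dec                 # decode; this is what the cloud sees
```

Because the transform is learned rather than a fixed linear projection (as in PCA), it can trade reconstruction fidelity for task utility at earlier split positions.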
🔎 Similar Papers
2023-10-16 · Network and Distributed System Security Symposium · Citations: 7