🤖 AI Summary
Split Learning (SL) preserves client data privacy but introduces novel security threats due to its distributed architecture. This paper first establishes a systematic, four-dimensional attack surface taxonomy, categorizing existing attacks by adversary role, privacy risk type, timing of data leakage, and vulnerability location. It then rigorously evaluates the effectiveness and limitations of prevailing defense strategies, including cryptographic techniques, data perturbation, distributed architectural modifications, and hybrid approaches. The analysis uncovers critical security gaps and proposes a unified threat model alongside a comprehensive security assessment framework that explicitly maps attack vectors across all phases of SL. These contributions provide both theoretical foundations and practical guidelines for enhancing SL privacy, while identifying concrete directions for future research.
📝 Abstract
Split Learning (SL) is a collaborative learning approach that improves privacy by keeping raw data on the client side while sharing only intermediate outputs with a server. However, the distributed nature of SL introduces new security challenges, necessitating a comprehensive exploration of potential attacks. This paper systematically reviews attacks on SL, classifying them by the attacker's role, the type of privacy risk, when data leakage occurs, and where vulnerabilities exist. We also analyze existing defense methods, including cryptographic methods, data modification approaches, distributed techniques, and hybrid solutions. Our findings reveal security gaps, highlighting the effectiveness and limitations of existing defenses. By identifying open challenges and future directions, this work provides valuable insights for addressing SL privacy risks and guiding further research.
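To make the exchange described in the abstract concrete, here is a minimal NumPy sketch of one split-learning training step: the client computes the cut-layer activation ("smashed data") and shares only that, while the server finishes the forward pass and returns the gradient at the cut. The layer sizes, learning rate, and the assumption that the server holds the label are illustrative choices, not details from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Client-side model: one linear layer (raw input x never leaves the client)
W_client = rng.normal(size=(4, 3)) * 0.1
# Server-side model: one linear layer producing a scalar prediction
w_server = rng.normal(size=3) * 0.1

x = rng.normal(size=4)   # private client input
y = 1.0                  # label, assumed here to be held by the server

# --- Client: forward pass up to the cut layer ---
smashed = x @ W_client            # only this intermediate output is shared

# --- Server: finish forward pass, compute loss and its gradients ---
pred = smashed @ w_server
loss = 0.5 * (pred - y) ** 2
grad_pred = pred - y
grad_w_server = grad_pred * smashed
grad_smashed = grad_pred * w_server   # gradient sent back to the client

# --- Client: backpropagate through its own layer ---
grad_W_client = np.outer(x, grad_smashed)

# Both parties update their own parameters locally
lr = 0.1
w_server -= lr * grad_w_server
W_client -= lr * grad_W_client
```

Note that the shared `smashed` vector and the returned `grad_smashed` are exactly the intermediate values the surveyed attacks target: an adversary observing this channel can attempt to reconstruct `x` without ever seeing it directly.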