Client-Cooperative Split Learning

📅 2026-03-09
🏛️ IEEE Transactions on Services Computing
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work proposes CliCooper, a framework designed for resource-constrained and partially trusted multi-client collaborative training scenarios, which simultaneously ensures data and label privacy, verifiable training integrity, and model copyright protection. CliCooper integrates split learning with differential privacy-based activation protection and a novel secret label obfuscation mechanism. It further introduces a dynamic chained watermarking scheme that enables cross-stage integrity verification and model provenance tracing. Experimental results demonstrate that the framework incurs no significant drop in model accuracy while effectively mitigating privacy threats: the success rate of clustering attacks drops to 0%, data reconstruction similarity decreases from 0.50 to 0.03, and the accuracy of model extraction attacks falls to near-random levels (approximately 1%).
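The differential privacy-based activation protection mentioned above can be pictured as clipping and noising the cut-layer activations before the data owner sends them to trainer clients. The sketch below is a generic illustration of that recipe, not CliCooper's actual mechanism; the function name and parameter values (`clip_norm`, `noise_std`) are illustrative assumptions.

```python
import numpy as np

def dp_protect_activations(acts, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip each activation vector to a maximum L2 norm, then add
    Gaussian noise. This is the standard clip-and-noise pattern for
    differentially private activation sharing in split learning;
    the parameter values here are illustrative, not from the paper."""
    rng = np.random.default_rng(rng)
    norms = np.linalg.norm(acts, axis=1, keepdims=True)
    # Scale down any row whose norm exceeds clip_norm; leave others as-is.
    clipped = acts * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=acts.shape)

# Example: a batch of 4 cut-layer activation vectors of width 8.
acts = np.random.default_rng(0).normal(size=(4, 8))
protected = dp_protect_activations(acts, clip_norm=1.0, noise_std=0.1)
```

Clipping bounds each example's contribution, which is what lets the added noise be calibrated to a privacy budget; the trainer clients then continue the forward pass on the noised activations.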

πŸ“ Abstract
Model training is increasingly offered as a service for resource-constrained data owners to build customized models. Split Learning (SL) enables such services by offloading training computation under privacy constraints, and evolves toward serverless and multi-client settings where model segments are distributed across training clients. This cooperative mode assumes partial trust: data owners hide labels and data from trainer clients, while trainer clients produce verifiable training artifacts and ownership proofs. We present CliCooper, a multi-client cooperative SL framework tailored for cooperative model training services in heterogeneous and partially trusted environments, where one client contributes data, while others collectively act as SL trainers. CliCooper bridges the privacy and trust gaps through two new designs. First, differential privacy-based activation protection and secret label obfuscation safeguard data owners' privacy without degrading model performance. Second, a dynamic chained watermarking scheme cryptographically links training stages on model segments across trainers, ensuring verifiable training integrity, robust model provenance, and copyright protection. Experiments show that CliCooper preserves model accuracy while enhancing resilience to privacy and ownership attacks. It reduces the success rate of clustering attacks (which infer label groups from intermediate activations) to 0%, decreases inversion-reconstruction (which recovers training data) similarity from 0.50 to 0.03, and limits model-extraction-based surrogates to about 1% accuracy, comparable to random guessing.
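The chained watermarking idea, cryptographically linking successive training stages so that any tampering with a segment breaks all later links, can be illustrated with a plain hash chain. This is a simplification: the paper's scheme embeds watermarks into the model segments themselves, and the function names below (`chain_watermark`, `verify_chain`) are hypothetical.

```python
import hashlib

def chain_watermark(prev_link: str, stage_id: int, weights: bytes) -> str:
    """Derive this stage's link by hashing the previous link together
    with the stage id and the segment's serialized weights. A hash
    chain stands in for the paper's cryptographic linking scheme."""
    h = hashlib.sha256()
    h.update(prev_link.encode())
    h.update(stage_id.to_bytes(4, "big"))
    h.update(weights)
    return h.hexdigest()

def verify_chain(genesis: str, stages, final_link: str) -> bool:
    """Recompute every link from the genesis value and compare the
    result to the claimed final link; any altered stage changes it."""
    link = genesis
    for stage_id, weights in stages:
        link = chain_watermark(link, stage_id, weights)
    return link == final_link
```

Because each link commits to the one before it, a verifier holding only the genesis value and the final link can audit the whole training sequence, which is the property that supports cross-stage integrity verification and provenance tracing.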
Problem

Research questions and friction points this paper is trying to address.

Split Learning
Privacy Preservation
Trustworthy Collaboration
Model Provenance
Differential Privacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Split Learning
Differential Privacy
Dynamic Chained Watermarking
Model Provenance
Privacy-Preserving Training