SL-ACC: A Communication-Efficient Split Learning Framework with Adaptive Channel-wise Compression

📅 2025-08-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the communication bottleneck in split learning (SL) caused by transmitting massive smashed data (activations and gradients) from edge devices, this paper proposes SL-ACC, an efficient grouped compression framework. Methodologically, SL-ACC first quantifies channel importance via Shannon entropy for fine-grained contribution assessment; it then introduces adaptive channel grouping coupled with intra-group differential compression to minimize transmission overhead while preserving model accuracy. Extensive experiments across multiple benchmark datasets demonstrate that SL-ACC significantly accelerates convergence—reducing training time to reach target accuracy compared to state-of-the-art methods—while cutting communication volume by up to 68%. Crucially, accuracy degradation remains bounded at less than 0.5%, confirming robustness. SL-ACC thus delivers a cost-effective, communication-efficient solution for distributed collaborative training under resource-constrained edge environments.

Technology Category

Application Category

📝 Abstract
The increasing complexity of neural networks poses a significant barrier to the deployment of distributed machine learning (ML) on resource-constrained devices, such as federated learning (FL). Split learning (SL) offers a promising solution by offloading the primary computing load from edge devices to a server via model partitioning. However, as the number of participating devices increases, the transmission of excessive smashed data (i.e., activations and gradients) becomes a major bottleneck for SL, slowing down the model training. To tackle this challenge, we propose a communication-efficient SL framework, named SL-ACC, which comprises two key components: adaptive channel importance identification (ACII) and channel grouping compression (CGC). ACII first identifies the contribution of each channel in the smashed data to model training using Shannon entropy. Following this, CGC groups the channels based on their entropy and performs group-wise adaptive compression to shrink the transmission volume without compromising training accuracy. Extensive experiments across various datasets validate that our proposed SL-ACC framework takes considerably less time to achieve a target accuracy than state-of-the-art benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Reduces communication bottleneck in split learning from excessive data transmission
Compresses activation and gradient data adaptively without accuracy loss
Accelerates distributed neural network training on resource-constrained devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive channel importance identification using entropy
Channel grouping compression for smashed data
Reduces transmission volume without accuracy loss
🔎 Similar Papers
No similar papers found.
Z
Zehang Lin
School of Computer and Information Engineering, Xiamen University of Technology, Xiamen, 361000, China
Z
Zheng Lin
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong
M
Miao Yang
School of Computer and Information Engineering, Xiamen University of Technology, Xiamen, 361000, China
J
Jianhao Huang
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong
Y
Yuxin Zhang
Institute of Space Internet, Fudan University, Shanghai 200438, China
Z
Zihan Fang
Institute of Space Internet, Fudan University, Shanghai 200438, China
Xia Du
Xia Du
Xiamen University of Technology
adversarial machine learning
Z
Zhe Chen
Institute of Space Internet, Fudan University, Shanghai 200438, China
S
Shunzhi Zhu
School of Computer and Information Engineering, Xiamen University of Technology, Xiamen, 361000, China
Wei Ni
Wei Ni
FIEEE, AAIA Fellow, Senior Principal Scientist & Conjoint Professor, CSIRO/UNSW
6G security and privacyconnected and trusted intelligenceapplied AI/ML