🤖 AI Summary
To address the prohibitively high communication overhead of transmitting intermediate activations in Split Learning (SL) with Vision Transformers (ViTs), this paper proposes an attention-driven dual-compression framework. Our method uniquely integrates class-agnostic attention-similarity aggregation with token-importance filtering to jointly compress both forward activations and backward gradients in a natural, end-to-end manner: we cluster and merge sample-level activations based on average attention scores from the client's final ViT layer, while dynamically pruning low-contribution tokens. Crucially, no additional hyperparameter tuning or gradient approximation is required. Experiments on ImageNet and other benchmarks demonstrate that our approach reduces communication cost by up to 42% over state-of-the-art SL methods while preserving model accuracy with negligible degradation (<0.5% drop). This establishes a new paradigm for efficient, privacy-preserving distributed visual learning.
📝 Abstract
This paper proposes a novel communication-efficient Split Learning (SL) framework, named Attention-based Double Compression (ADC), which reduces the communication overhead required for transmitting intermediate Vision Transformer (ViT) activations during SL training. ADC incorporates two parallel compression strategies. The first merges the activations of similar samples, based on the average attention score calculated in the last client layer; this strategy is class-agnostic, meaning it can merge samples belonging to different classes without losing generalization ability or degrading final accuracy. The second strategy follows the first and discards the least meaningful tokens, further reducing the communication cost. Combining these strategies not only reduces the amount of data sent during the forward pass, but also naturally compresses the gradients, allowing the whole model to be trained without additional tuning or gradient approximations. Simulation results demonstrate that Attention-based Double Compression outperforms state-of-the-art SL frameworks by significantly reducing communication overhead while maintaining high accuracy.
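To make the two strategies concrete, the sketch below illustrates one plausible reading of the pipeline: samples whose attention profiles are sufficiently similar are greedily merged (averaged), and then only the highest-attention tokens of each merged sample are kept. The function name `adc_compress`, the similarity threshold, the greedy grouping rule, and the input shapes are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def adc_compress(acts, attn, merge_thresh=0.9, keep_ratio=0.5):
    """Illustrative sketch of attention-based double compression.

    acts: (B, N, D) token activations at the client's cut layer.
    attn: (B, N) average attention score per token (assumed shape).
    Returns the compressed activations and the merge groups.
    """
    B, N, D = acts.shape
    # --- Strategy 1: class-agnostic sample merging -------------------
    # Use each sample's normalized attention profile as a descriptor and
    # greedily merge samples whose cosine similarity to a group's
    # representative exceeds merge_thresh (hypothetical merging rule).
    desc = attn / (np.linalg.norm(attn, axis=1, keepdims=True) + 1e-8)
    groups = []
    for i in range(B):
        for grp in groups:
            if desc[i] @ desc[grp[0]] > merge_thresh:
                grp.append(i)
                break
        else:
            groups.append([i])
    merged_acts = np.stack([acts[g].mean(axis=0) for g in groups])
    merged_attn = np.stack([attn[g].mean(axis=0) for g in groups])
    # --- Strategy 2: token pruning -----------------------------------
    # Keep only the top-k tokens of each merged sample by attention.
    k = max(1, int(N * keep_ratio))
    idx = np.argsort(-merged_attn, axis=1)[:, :k]            # (B', k)
    pruned = np.take_along_axis(merged_acts, idx[..., None], axis=1)
    return pruned, groups                                     # (B', k, D)
```

Because only the merged, pruned tensor crosses the network, the server's gradients come back with the same reduced shape, which is one way to see why the backward pass is compressed "for free" without any gradient approximation.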