SALT: A Lightweight Model Adaptation Method for Closed Split Computing Environments

πŸ“… 2025-06-09
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing model adaptation methods fail in split computing scenarios where both the head and tail networks are fully black-box, i.e., inaccessible for parameter modification or architectural inspection. Method: We propose SALT, a lightweight edge-side adaptation framework that inserts trainable micro-adapters only on the client side to perform fine-grained calibration of the hidden features output by the head network, without altering the original model parameters and without requiring architectural knowledge or parameter transmission. Contribution/Results: SALT is the first method to enable parameter-free, low-overhead, packet-loss-robust incremental adaptation under black-box split architectures. With only 0.3M deployed parameters, it achieves lower communication cost and higher resilience than prior approaches. On CIFAR-10/100, SALT surpasses full fine-tuning baselines in classification accuracy while reducing training latency by 47%, significantly improving the practicality and deployment efficiency of personalized edge AI inference.

πŸ“ Abstract
We propose SALT (Split-Adaptive Lightweight Tuning), a lightweight model adaptation framework for Split Computing under closed constraints, where the head and tail networks are proprietary and inaccessible to users. In such closed environments, conventional adaptation methods are infeasible since they require access to model parameters or architectures. SALT addresses this challenge by introducing a compact, trainable adapter on the client side to refine latent features from the head network, enabling user-specific adaptation without modifying the original models or increasing communication overhead. We evaluate SALT on user-specific classification tasks with CIFAR-10 and CIFAR-100, demonstrating improved accuracy with lower training latency compared to fine-tuning methods. Furthermore, SALT facilitates model adaptation for robust inference over lossy networks, a common challenge in edge-cloud environments. With minimal deployment overhead, SALT offers a practical solution for personalized inference in edge AI systems under strict system constraints.
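The client-side adapter idea above can be sketched in a few lines. This is a toy illustration under stated assumptions, not the paper's implementation: the black-box head and tail are stood in for by frozen random linear maps, the adapter is a small near-identity linear layer, and an analytic gradient through the tail substitutes for whatever training signal the real system uses. All names (`head`, `tail`, `adapt`) are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the proprietary, frozen head and tail networks. In SALT's
# setting these are black boxes; fixed random linear maps are used here
# purely for illustration.
W_head = rng.normal(size=(8, 4))    # head: input (4,) -> latent (8,)
W_tail = rng.normal(size=(3, 8))    # tail: latent (8,) -> output (3,)

def head(x):
    # Runs on the client; only its output (the latent features) is observable.
    return W_head @ x

def tail(z):
    # Runs on the server; its parameters cannot be modified.
    return W_tail @ z

# Client-side adapter: a small trainable map over the latent features,
# initialized near identity so it starts as an approximate no-op.
A = np.eye(8) + 0.01 * rng.normal(size=(8, 8))

def adapt(z):
    return A @ z

# Train ONLY the adapter on user-specific data (a single toy pair here),
# leaving the head and tail weights untouched.
x = rng.normal(size=4)
y_target = rng.normal(size=3)

loss_before = np.sum((tail(adapt(head(x))) - y_target) ** 2)

lr = 2e-4
for _ in range(1000):
    z = head(x)
    residual = tail(adapt(z)) - y_target
    # Gradient of the squared error w.r.t. A (chain rule through the tail).
    A -= lr * np.outer(W_tail.T @ (2 * residual), z)

loss_after = np.sum((tail(adapt(head(x))) - y_target) ** 2)
```

Only the 8x8 adapter matrix is updated, which mirrors the key property claimed in the abstract: user-specific adaptation without touching the proprietary models or adding communication overhead, since the adapted latent has the same shape as the original.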
Problem

Research questions and friction points this paper is trying to address.

Conventional adaptation methods require access to model parameters or architectures, which closed split computing systems do not expose
Head and tail networks are proprietary, so latent features must be refined without modifying either model
Edge AI inference must remain accurate under tight latency budgets and lossy edge-cloud links
Innovation

Methods, ideas, or system contributions that make the work stand out.

Client-side lightweight adapter for feature refinement
No modification to original models required
Low training latency with improved accuracy
πŸ”Ž Similar Papers
No similar papers found.