AI Summary
This work addresses the challenge of deploying Vision Transformers (ViTs) on resource-constrained edge devices, where high computational costs hinder local inference, while full offloading to the cloud incurs significant latency. To bridge this gap, the authors propose a collaborative inference framework that deploys a lightweight, general-purpose ViT on the edge and multiple medium-scale expert ViTs at near-edge servers. A dynamic routing mechanism, guided by Top-k predictions, activates only the most relevant expert when the edge model's confidence is low. Furthermore, a progressive expert training strategy enhances specialization. Experiments on CIFAR-100 demonstrate that the approach improves accuracy by 4.12% on the activated expert subsets and by 2.76% overall, while reducing latency by up to 45% compared to pure edge execution and cutting energy consumption by up to 46% relative to offloading exclusively to near-edge servers.
Abstract
Deploying Vision Transformers on edge devices is challenging due to their high computational complexity, while full offloading to cloud resources incurs significant latency overheads. We propose a novel collaborative inference framework that orchestrates a lightweight generalist ViT on an edge device and multiple medium-sized expert ViTs on a near-edge accelerator. A routing mechanism uses the edge model's Top-$\mathit{k}$ predictions to dynamically select the most relevant expert for low-confidence samples. We further design a progressive specialist training strategy to enhance expert accuracy on target dataset subsets. Extensive experiments on the CIFAR-100 dataset using a real-world edge and near-edge testbed demonstrate the superiority of our framework. Specifically, the proposed training strategy improves expert specialization accuracy by 4.12% on target subsets and enhances overall accuracy by 2.76% over static experts. Moreover, our method reduces latency by up to 45% compared to edge-only execution, and energy consumption by up to 46% compared to near-edge-only offloading.
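To make the confidence-gated routing step concrete, the following is a minimal sketch assuming PyTorch-style models. The function name `collaborative_inference`, the `expert_class_sets` mapping, the confidence threshold, and the overlap-based expert selection rule are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def collaborative_inference(edge_model, experts, expert_class_sets, image,
                            k=5, confidence_threshold=0.8):
    """Route a single sample between the edge generalist and a near-edge expert.

    experts: list of specialist models (conceptually hosted on the near-edge server).
    expert_class_sets: list of class-index sets each expert specializes in.
    Assumes a single-image batch; all names and thresholds are illustrative.
    """
    # 1. Run the lightweight generalist ViT on the edge device.
    with torch.no_grad():
        edge_logits = edge_model(image)
    probs = F.softmax(edge_logits, dim=-1)
    confidence, _ = probs.max(dim=-1)

    # 2. If the edge model is confident enough, keep its prediction locally.
    if confidence.item() >= confidence_threshold:
        return edge_logits.argmax(dim=-1)

    # 3. Otherwise, use the Top-k predicted classes to pick the most relevant expert
    #    (here: the expert whose class subset overlaps most with the Top-k set).
    topk_classes = set(probs.topk(k, dim=-1).indices.flatten().tolist())
    best_expert = max(
        range(len(experts)),
        key=lambda i: len(topk_classes & expert_class_sets[i]),
    )

    # 4. Offload only to that expert on the near-edge accelerator.
    with torch.no_grad():
        expert_logits = experts[best_expert](image)
    return expert_logits.argmax(dim=-1)
```

In this sketch, confident samples never leave the edge device, which is what drives the latency and energy savings: only uncertain samples pay the cost of a single expert invocation on the near-edge side.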