Ask the Expert: Collaborative Inference for Vision Transformers with Near-Edge Accelerators

📅 2026-02-11
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the challenge of deploying Vision Transformers (ViTs) on resource-constrained edge devices, where high computational costs hinder local inference, while full offloading to the cloud incurs significant latency. To bridge this gap, the authors propose a collaborative inference framework that deploys a lightweight, general-purpose ViT on the edge and multiple medium-scale expert ViTs at near-edge servers. A dynamic routing mechanism, guided by Top-k predictions, activates only the most relevant expert when the edge model’s confidence is low. Furthermore, a progressive expert training strategy enhances specialization. Experiments on CIFAR-100 demonstrate that the approach improves accuracy by 4.12% for the activated expert subset and by 2.76% overall, while reducing latency by 45% compared to pure edge execution and cutting energy consumption by 46% relative to offloading exclusively to near-edge servers.

📝 Abstract
Deploying Vision Transformers on edge devices is challenging due to their high computational complexity, while full offloading to cloud resources incurs significant latency overheads. We propose a novel collaborative inference framework, which orchestrates a lightweight generalist ViT on an edge device and multiple medium-sized expert ViTs on a near-edge accelerator. A novel routing mechanism uses the edge model's Top-$\mathit{k}$ predictions to dynamically select the most relevant expert for samples with low confidence. We further design a progressive specialist training strategy to enhance expert accuracy on dataset subsets. Extensive experiments on the CIFAR-100 dataset using a real-world edge and near-edge testbed demonstrate the superiority of our framework. Specifically, the proposed training strategy improves expert specialization accuracy by 4.12% on target subsets and enhances overall accuracy by 2.76% over static experts. Moreover, our method reduces latency by up to 45% compared to edge-only execution, and energy consumption by up to 46% compared to near-edge-only offloading.
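The routing mechanism described above admits a compact sketch: the lightweight edge ViT answers locally when its top prediction is confident enough, and otherwise its Top-k predictions select a single near-edge expert to activate. This is a minimal illustration under assumed details; the threshold `EDGE_THRESHOLD`, the `class_to_expert` mapping, and the function names are hypothetical, not the authors' API.

```python
import math

# Confidence-gated Top-k routing sketch. EDGE_THRESHOLD, TOP_K, and
# class_to_expert are illustrative assumptions, not values from the paper.
EDGE_THRESHOLD = 0.8  # edge confidence below this triggers expert offload
TOP_K = 3             # number of candidate classes used for routing

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def route(edge_logits, class_to_expert):
    """Return ('edge', pred) when confident, else ('expert', expert_id, top_k)."""
    probs = softmax(edge_logits)
    pred = max(range(len(probs)), key=probs.__getitem__)
    if probs[pred] >= EDGE_THRESHOLD:
        return ('edge', pred)  # confident: answer locally on the edge device
    # Rank candidate classes by edge probability (the Top-k predictions)
    top_k = sorted(range(len(probs)), key=probs.__getitem__, reverse=True)[:TOP_K]
    # Activate only the expert responsible for the most likely candidate class
    expert_id = class_to_expert[top_k[0]]
    return ('expert', expert_id, top_k)
```

For CIFAR-100, one plausible configuration maps each block of classes to one medium-sized expert (e.g. `class_to_expert = {i: i // 10 for i in range(100)}`), so that a low-confidence sample wakes exactly one specialist on the near-edge accelerator.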
Problem

Research questions and friction points this paper is trying to address.

Vision Transformers
edge computing
computational complexity
latency overhead
collaborative inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

collaborative inference
Vision Transformers
dynamic routing
progressive specialist training
edge-near-edge computing
Hao Liu
University of Electronic Science and Technology of China
RIS, stacked intelligent metasurface, DRL
Suhaib A. Fahmy
CEMSE Division, King Abdullah University of Science and Technology, Thuwal 23955, Saudi Arabia