🤖 AI Summary
To address the prohibitively high communication and computational overhead of distributed inference for foundation models (e.g., ViT, BERT, GPT-2) on resource-constrained edge devices, this paper proposes PRISM, a partition-aware inference framework. Methodologically, PRISM introduces three key innovations: (1) Segment Means, a lightweight intermediate-feature compression scheme that eliminates redundancy in Key/Value representations; (2) a restructured self-attention mechanism that enables block-wise computation and position-level partitioning; and (3) a partition-aware causal masking scheme designed for autoregressive generation. Extensive experiments across multiple datasets show that PRISM reduces inter-device communication by up to 99.2%, cuts per-device computation by 51.24%, and incurs only negligible accuracy degradation, substantially improving both the inference efficiency and the scalability of large language and vision models in edge environments.
📄 Abstract
Foundation models (FMs) have achieved remarkable success across a wide range of applications, from image classification to natural language processing, but pose significant challenges for deployment at the edge. This has sparked growing interest in practical and efficient strategies for bringing foundation models to edge environments. In this work, we propose PRISM, a communication-efficient and compute-aware strategy for distributed Transformer inference on edge devices. Our method leverages a Segment Means representation to approximate intermediate output features, drastically reducing inter-device communication. Additionally, we restructure the self-attention mechanism to eliminate the redundant computation caused by per-device Key/Value calculation in position-wise partitioning, and we design a partition-aware causal masking scheme tailored to autoregressive models. We evaluate PRISM on ViT, BERT, and GPT-2 across diverse datasets, namely CIFAR-10, CIFAR-100, ImageNet-1k, GLUE, and CBT. Our results demonstrate substantial reductions in communication overhead (up to 99.2% for BERT at compression rate CR = 128) and per-device computation (51.24% for BERT at the same setting), with only minor accuracy degradation. This method offers a scalable and practical solution for deploying foundation models in distributed, resource-constrained environments.
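As described above, the Segment Means representation amounts to replacing each run of consecutive intermediate feature vectors with its mean before transmission, shrinking the payload by the compression rate CR. A minimal NumPy sketch of this idea, assuming a `(seq_len, d)` feature matrix; the function name and interface are illustrative, not taken from the paper:

```python
import numpy as np

def segment_means(x: np.ndarray, cr: int) -> np.ndarray:
    """Compress a (seq_len, d) feature matrix by replacing each
    run of `cr` consecutive rows with their mean.

    The last segment may be shorter than `cr`; it is averaged
    over however many rows it actually contains.
    """
    seq_len, d = x.shape
    n_seg = -(-seq_len // cr)  # ceiling division
    out = np.empty((n_seg, d), dtype=x.dtype)
    for i in range(n_seg):
        out[i] = x[i * cr:(i + 1) * cr].mean(axis=0)
    return out

# A sequence of 256 feature vectors compressed at CR = 128
# shrinks to 2 vectors before being sent to the next device.
x = np.random.randn(256, 64).astype(np.float32)
z = segment_means(x, cr=128)
print(z.shape)  # (2, 64)
```

At CR = 128 the number of transmitted rows drops by a factor of 128, i.e. a 1 − 1/128 ≈ 99.2% reduction, which is consistent with the communication savings quoted above.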