PRISM: Distributed Inference for Foundation Models at Edge

πŸ“… 2025-07-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
To address the prohibitively high communication and computational overhead of distributed inference for foundation models (e.g., ViT, BERT, GPT-2) on resource-constrained edge devices, this paper proposes PRISMβ€”a partition-aware inference framework. Methodologically, PRISM introduces three key innovations: (1) Segment Means, a lightweight intermediate feature compression scheme that eliminates redundancy in Key/Value representations; (2) a restructured self-attention mechanism enabling block-wise computation and position-level partitioning; and (3) partition-aware causal masking, specifically designed for autoregressive generation. Extensive experiments across multiple datasets demonstrate that PRISM reduces inter-device communication by up to 99.2%, decreases per-device computation by 51.24%, and incurs only negligible accuracy degradation. These results significantly enhance both inference efficiency and scalability of large language and vision models in edge environments.
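The Segment Means idea described above can be illustrated with a small sketch: a (seq_len, dim) feature matrix is compressed by averaging contiguous segments of length CR before transmission, shrinking communication by roughly the compression rate. This is a hypothetical illustration of segment-mean pooling, not the paper's exact scheme; the function name and padding handling are assumptions.

```python
import numpy as np

def segment_means(x: np.ndarray, cr: int) -> np.ndarray:
    """Compress a (seq_len, dim) feature matrix by averaging
    contiguous segments of length `cr` (the compression rate).

    Hypothetical sketch: PRISM's actual Segment Means scheme may
    differ in how segment boundaries and ragged tails are handled.
    """
    seq_len, dim = x.shape
    n_seg = int(np.ceil(seq_len / cr))
    out = np.empty((n_seg, dim), dtype=x.dtype)
    for i in range(n_seg):
        # Each output row summarizes one segment of up to `cr` positions.
        out[i] = x[i * cr:(i + 1) * cr].mean(axis=0)
    return out

# A 4-token sequence compressed at CR = 2 yields 2 summary tokens.
x = np.arange(8, dtype=float).reshape(4, 2)
compressed = segment_means(x, cr=2)
```

At CR = 128, a 512-token Key/Value block would be summarized by just 4 mean vectors, which is the source of the large communication savings the summary reports.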

πŸ“ Abstract
Foundation models (FMs) have achieved remarkable success across a wide range of applications, from image classification to natural language processing, but pose significant challenges for deployment at the edge. This has sparked growing interest in developing practical and efficient strategies for bringing foundation models to edge environments. In this work, we propose PRISM, a communication-efficient and compute-aware strategy for distributed Transformer inference on edge devices. Our method leverages a Segment Means representation to approximate intermediate output features, drastically reducing inter-device communication. Additionally, we restructure the self-attention mechanism to eliminate redundant computations caused by per-device Key/Value calculation in position-wise partitioning and design a partition-aware causal masking scheme tailored for autoregressive models. We evaluate PRISM on ViT, BERT, and GPT-2 across diverse datasets, namely CIFAR-10, CIFAR-100, ImageNet-1k, GLUE, and CBT. Our results demonstrate substantial reductions in communication overhead (up to 99.2% for BERT at compression rate CR = 128) and per-device computation (51.24% for BERT at the same setting), with only minor accuracy degradation. This method offers a scalable and practical solution for deploying foundation models in distributed resource-constrained environments.
Problem

Research questions and friction points this paper is trying to address.

Efficient distributed inference for foundation models at the edge
Reducing communication overhead in edge device deployments
Optimizing computation for Transformer models on edge devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Segment Means representation reduces communication overhead
Restructured self-attention eliminates redundant computations
Partition-aware causal masking for autoregressive models
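The third innovation, partition-aware causal masking, can be sketched as follows: when query positions are partitioned across devices, each device builds a causal mask only for its own slice of queries while attending over the shared key positions. The function name and interface below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def partition_causal_mask(q_start: int, q_len: int, k_len: int) -> np.ndarray:
    """Boolean causal mask for a device owning query positions
    [q_start, q_start + q_len), attending over k_len key positions.
    True means attention is allowed (key position <= query position).

    Hypothetical sketch of the partitioned causal-masking idea.
    """
    q_pos = np.arange(q_start, q_start + q_len)[:, None]  # this device's queries
    k_pos = np.arange(k_len)[None, :]                     # global key positions
    return k_pos <= q_pos

# Device holding global query positions 2-3 of a 4-token sequence:
mask = partition_causal_mask(q_start=2, q_len=2, k_len=4)
```

Because each device materializes only its own q_len x k_len slice of the full causal mask, autoregressive structure is preserved without every device recomputing the full attention mask.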
Muhammad Azlan Qazi
Department of Electrical and Computer Engineering, Aarhus University, Denmark
Alexandros Iosifidis
Professor, Dept. of Computing Sciences, Tampere University
Computational Intelligence · Machine Learning · Machine Perception · Financial Data Analytics
Qi Zhang
Department of Electrical and Computer Engineering, Aarhus University, Denmark