EPARA: Parallelizing Categorized AI Inference in Edge Clouds

📅 2025-11-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the growing demand for AI inference tasks in edge-cloud environments amid constrained hardware resources, this paper proposes a task-classification-based parallel inference framework. The framework classifies incoming requests according to latency sensitivity, invocation frequency, and GPU resource requirements, enabling coordinated request-level and service-level scheduling. It comprises three core components: a task-classification-aware parallel allocator, a distributed request processor, and a state-aware scheduler—collectively supporting dynamic resource adaptation for heterogeneous AI workloads, including large language models (LLMs) and image segmentation. Evaluated on a real-world edge testbed, the framework achieves up to a 2.1× improvement in effective throughput over mainstream baselines, while maintaining strong task adaptability and stringent latency guarantees.

📝 Abstract
With the increasing adoption of AI applications such as large language models and computer vision, the computational demands on AI inference systems are continuously rising, making the enhancement of task processing capacity using existing hardware a primary objective in edge clouds. We propose EPARA, an end-to-end parallel AI inference framework for edge clouds, aimed at enhancing edge AI serving capability. Our key idea is to categorize tasks based on their sensitivity to latency/frequency and their requirement for GPU resources, thereby achieving both request-level and service-level task-resource allocation. EPARA consists of three core components: 1) a task-categorized parallelism allocator that decides the parallel mode of each task, 2) a distributed request handler that performs the computation for each specific request, and 3) a state-aware scheduler that periodically updates service placement in edge clouds. We implement an EPARA prototype and conduct a case study of EPARA's operation on LLM and segmentation tasks. Evaluation through testbed experiments involving edge servers, embedded devices, and microcomputers shows that EPARA achieves up to 2.1× higher goodput on production workloads compared to prior frameworks, while adapting to a variety of edge AI inference tasks.
Problem

Research questions and friction points this paper is trying to address.

Enhancing AI inference task processing capacity in edge clouds
Optimizing task-resource allocation based on latency and GPU requirements
Improving edge AI serving capability through categorized parallelization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Categorizes tasks by latency sensitivity and GPU needs
Uses parallel allocator for task-resource matching
Employs state-aware scheduler for dynamic service placement
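To make the categorization idea concrete, here is a minimal, hypothetical sketch of how tasks might be mapped to parallel modes by latency sensitivity and GPU needs. The thresholds, attribute names, and mode labels are illustrative assumptions, not EPARA's actual criteria or implementation, which the summary does not specify.

```python
from dataclasses import dataclass

# Assumed thresholds -- purely illustrative, not from the paper.
LATENCY_SENSITIVE_MS = 100   # assumed latency-SLO cutoff
GPU_HEAVY_FRACTION = 0.5     # assumed fraction of one GPU's memory

@dataclass
class InferenceTask:
    name: str
    latency_slo_ms: float    # end-to-end latency target for the task
    gpu_mem_fraction: float  # share of a single GPU's memory required

def categorize(task: InferenceTask) -> str:
    """Map a task to an illustrative parallel execution mode."""
    latency_sensitive = task.latency_slo_ms <= LATENCY_SENSITIVE_MS
    gpu_heavy = task.gpu_mem_fraction >= GPU_HEAVY_FRACTION
    if latency_sensitive and gpu_heavy:
        return "tensor-parallel"      # split one request across GPUs
    if latency_sensitive:
        return "dedicated-instance"   # reserve resources, avoid queueing
    if gpu_heavy:
        return "pipeline-parallel"    # stage the model across devices
    return "batched-shared"           # co-locate and batch with other tasks

if __name__ == "__main__":
    tasks = [
        InferenceTask("llm-chat", latency_slo_ms=80, gpu_mem_fraction=0.9),
        InferenceTask("image-segmentation", latency_slo_ms=500, gpu_mem_fraction=0.3),
    ]
    for t in tasks:
        print(t.name, "->", categorize(t))
```

In this sketch, the allocator's decision reduces to a two-axis lookup; the paper's actual allocator additionally accounts for invocation frequency and coordinates with the state-aware scheduler's placement decisions.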
Yubo Wang
Tianjin Key Laboratory of Advanced Networking, Tianjin University
Yubo Cui
Northeastern University
3D Computer Vision · Object Tracking · Robotics
Tuo Shi
Institute of Microelectronics, CAS
Memristor · Processing-in-Memory · Bio-inspired Computing
Danyang Li
Shuimu Scholar, Tsinghua University
Embodied AI · Mobile Computing · Internet of Things · Edge Computing · SLAM System
Wenxin Li
Tianjin Key Laboratory of Advanced Networking, Tianjin University
Lide Suo
Tianjin Key Laboratory of Advanced Networking, Tianjin University
Tao Wang
Tianjin Key Laboratory of Advanced Networking, Tianjin University
Xin Xie
Tianjin Key Laboratory of Advanced Networking, Tianjin University