🤖 AI Summary
To address the growing demand for AI inference tasks in edge-cloud environments amid constrained hardware resources, this paper proposes a task-classification-based parallel inference framework. The framework classifies incoming requests according to latency sensitivity, invocation frequency, and GPU resource requirements, enabling coordinated request-level and service-level scheduling. It comprises three core components: a task-classification-aware parallel allocator, a distributed request processor, and a state-aware scheduler—collectively supporting dynamic resource adaptation for heterogeneous AI workloads, including large language models (LLMs) and image segmentation. Evaluated on a real-world edge testbed, the framework achieves up to a 2.1× improvement in effective throughput over mainstream baselines, while maintaining strong task adaptability and stringent latency guarantees.
📝 Abstract
With the increasing adoption of AI applications such as large language models and computer vision, the computational demands on AI inference systems are continuously rising, making it a primary objective in edge clouds to enhance task processing capacity on existing hardware. We propose EPARA, an end-to-end parallel AI inference framework for the edge, aimed at enhancing edge AI serving capability. Our key idea is to categorize tasks based on their sensitivity to latency/frequency and their requirement for GPU resources, thereby achieving both request-level and service-level task-resource allocation. EPARA consists of three core components: 1) a task-categorized parallelism allocator that decides the parallel mode of each task, 2) a distributed request handler that performs the computation for each specific request, and 3) a state-aware scheduler that periodically updates service placement in edge clouds. We implement an EPARA prototype and conduct a case study of EPARA's operation for LLM and segmentation tasks. Evaluation through testbed experiments involving edge servers, embedded devices, and microcomputers shows that EPARA achieves up to 2.1× higher goodput on production workloads compared to prior frameworks, while adapting to various edge AI inference tasks.
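To make the categorization idea concrete, here is a minimal, hypothetical sketch of how tasks might be mapped to parallel modes based on the three criteria the abstract names (latency sensitivity, invocation frequency, GPU resource requirements). All class, field, and mode names below are illustrative assumptions, not the paper's actual API.

```python
from dataclasses import dataclass

# Hypothetical EPARA-style task categorization; names and thresholds
# are illustrative, not taken from the paper.
@dataclass
class Task:
    latency_sensitive: bool   # e.g. interactive LLM chat
    high_frequency: bool      # invoked often relative to other services
    gpu_heavy: bool           # needs a large share of GPU memory/compute

def choose_parallel_mode(task: Task) -> str:
    """Map a task category to a coarse parallel-execution mode."""
    if task.gpu_heavy:
        # Large models (e.g. LLMs) are split across multiple devices.
        return "model-parallel"
    if task.latency_sensitive or task.high_frequency:
        # Light but hot tasks are replicated so requests run concurrently.
        return "data-parallel"
    # Cold, latency-tolerant tasks can share a device with others.
    return "colocated"

# A latency-sensitive but lightweight task gets replicated:
print(choose_parallel_mode(Task(True, False, False)))  # data-parallel
```

In the paper's framing, a decision like this would be made per service by the parallelism allocator, with the state-aware scheduler revisiting placements periodically as load shifts.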