Orchestration for Domain-specific Edge-Cloud Language Models

📅 2025-07-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the neglect of inter-component interactions and the challenge of jointly optimizing latency, cost, and accuracy under dynamic SLO constraints in edge-cloud collaborative LLM serving, this paper proposes ECO-LLM. Methodologically, it models the LLM service pipeline as an end-to-end joint optimization problem and introduces two key innovations, query clustering and Pareto-optimal path selection, enabling both intra-domain configuration exploration and runtime adaptive scheduling. By integrating performance-metric modeling with an edge-cloud cooperative inference architecture, ECO-LLM achieves fine-grained, query-level component configuration optimization. Experiments in smart home and autonomous vehicle scenarios show that, compared to GPT-4o, ECO-LLM reduces cost by 90% and latency by 55% while improving average accuracy to 90% (vs. 74%). For previously unseen queries, it reduces cost or improves response time by 62% on average over state-of-the-art routing methods, while strictly satisfying SLO constraints.

📝 Abstract
The remarkable performance of Large Language Models (LLMs) has inspired many applications, which often necessitate edge-cloud collaboration due to connectivity, privacy, and cost considerations. Traditional methods primarily focus on selecting the best LLM model for optimizing performance, while neglecting the critical interplay between the components of the LLM serving pipeline (context retrieval, query preprocessing, etc.) and the changing latency and cost constraints. We introduce ECO-LLM (Edge-Cloud Orchestrator for LLMs), a novel system that reframes this problem as a joint optimization challenge and solves it by systematically exploring component configurations and dynamically selecting optimal strategies at the query level. ECO-LLM consists of two components: (1) the ECO-LLM Emulator, which efficiently explores the vast configuration space using query clustering and Pareto-optimal path selection, gathering domain-specific performance metrics without exhaustive evaluation; and (2) the ECO-LLM Runtime, which leverages these metrics to dynamically select optimal resolution strategies for user queries while meeting user-defined Service Level Objectives (SLOs). We evaluate ECO-LLM in smart home and smart car assistant scenarios. With an exhaustive exploration of all possible configurations for seen queries, ECO-LLM outperforms cloud-based models like GPT-4o in terms of accuracy (90% vs. 74% on average) while reducing costs by 90% and latency by 55%, demonstrating the value of its joint optimization at the query level. In practical deployment for previously unseen queries, ECO-LLM selects configurations that reduce costs by 62% or improve response times by 62% on average compared to state-of-the-art model routing approaches, while maintaining higher accuracy and consistently adhering to specified latency and cost constraints.
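The Pareto-optimal path selection the abstract describes can be illustrated with a minimal sketch. The paper's actual configuration space and metrics model are not given here, so the `Config` class, the example configurations, and their cost/latency/accuracy numbers are all hypothetical; the sketch only shows the standard dominance filter that keeps configurations for which no alternative is at least as good on every metric and strictly better on one.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    name: str
    cost: float      # dollars per query (lower is better)
    latency: float   # seconds (lower is better)
    accuracy: float  # fraction of queries answered correctly (higher is better)

def dominates(a: Config, b: Config) -> bool:
    """a dominates b if a is no worse on every metric and strictly better on at least one."""
    no_worse = a.cost <= b.cost and a.latency <= b.latency and a.accuracy >= b.accuracy
    strictly_better = a.cost < b.cost or a.latency < b.latency or a.accuracy > b.accuracy
    return no_worse and strictly_better

def pareto_frontier(configs: list[Config]) -> list[Config]:
    """Keep only configurations that no other configuration dominates."""
    return [c for c in configs if not any(dominates(o, c) for o in configs)]

# Hypothetical pipeline configurations for one query cluster.
configs = [
    Config("edge-only",   cost=0.001, latency=0.4, accuracy=0.82),
    Config("cloud-model", cost=0.020, latency=1.2, accuracy=0.74),
    Config("edge+rag",    cost=0.004, latency=0.9, accuracy=0.90),
]
frontier = pareto_frontier(configs)
# "cloud-model" is dominated by "edge-only" (worse on all three metrics),
# so only "edge-only" and "edge+rag" survive.
```

Pruning dominated configurations like this is what lets a runtime restrict its per-query choice to a small frontier instead of re-evaluating the full configuration space.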
Problem

Research questions and friction points this paper is trying to address.

Optimizing edge-cloud LLM collaboration for cost and latency
Jointly configuring LLM pipeline components for domain-specific needs
Dynamic query-level strategy selection meeting SLO constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint optimization of LLM components dynamically
Query clustering and Pareto-optimal path selection
Dynamic strategy selection meeting user SLOs
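The last bullet, SLO-aware strategy selection at query time, can be sketched as a simple constrained pick over candidate strategies. The paper's runtime presumably uses its learned performance metrics; the candidate list, field names, and threshold values below are illustrative assumptions, showing only the shape of the decision: filter candidates by the user's latency and cost SLOs, then maximize predicted accuracy.

```python
def select_strategy(candidates: list[dict], max_latency: float, max_cost: float):
    """Return the highest-accuracy candidate whose predicted latency and cost
    both satisfy the user-defined SLOs, or None if no candidate qualifies."""
    feasible = [c for c in candidates
                if c["latency"] <= max_latency and c["cost"] <= max_cost]
    if not feasible:
        return None
    return max(feasible, key=lambda c: c["accuracy"])

# Hypothetical per-strategy predictions for an incoming query.
candidates = [
    {"name": "edge-only", "latency": 0.4, "cost": 0.001, "accuracy": 0.82},
    {"name": "edge+rag",  "latency": 0.9, "cost": 0.004, "accuracy": 0.90},
    {"name": "cloud",     "latency": 1.2, "cost": 0.020, "accuracy": 0.88},
]
best = select_strategy(candidates, max_latency=1.0, max_cost=0.01)
# Under these SLOs the cloud strategy is infeasible (too slow),
# so the "edge+rag" entry wins on accuracy among the feasible set.
```

Because the SLO thresholds arrive with the query, the same candidate set can yield different choices as latency or cost budgets tighten or relax.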