PromptTuner: SLO-Aware Elastic System for LLM Prompt Tuning

📅 2026-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of efficiently managing deep learning resources while meeting service-level objectives (SLOs) for large language model (LLM) prompt tuning and controlling resource costs. The authors propose an SLO-aware elastic scheduling system that accelerates tuning convergence through a Prompt Bank and enables dynamic resource allocation via an SLO-aware Workload Scheduler. Experimental results show that the proposed approach significantly reduces both SLO violations and resource expenditure: it achieves 4.0× and 7.9× lower SLO violation rates than INFless and ElasticFlow, respectively, while cutting resource costs by 1.6× and 4.5× against the same baselines.

📝 Abstract
Prompt tuning has become a prominent strategy for enhancing the performance of Large Language Models (LLMs) on downstream tasks. Many IT enterprises now offer Prompt-Tuning-as-a-Service to meet the growing demand for prompt tuning LLMs on downstream tasks. Their primary objective is to satisfy users' Service Level Objectives (SLOs) while reducing resource provisioning costs. Nevertheless, our characterization analysis of existing deep learning resource management systems reveals that they are insufficient to optimize these objectives for LLM prompt tuning workloads. In this paper, we introduce PromptTuner, an SLO-aware elastic system that optimizes LLM prompt tuning. It contains two innovations. (1) We design a Prompt Bank to identify efficient initial prompts that expedite the convergence of prompt tuning. (2) We develop a Workload Scheduler to enable fast resource allocation that reduces SLO violations and resource costs. In our evaluation, PromptTuner reduces SLO violations by 4.0× and 7.9×, and lowers costs by 1.6× and 4.5×, compared to INFless and ElasticFlow, respectively.
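The abstract does not detail how the Prompt Bank picks an initial prompt, but a common design for this kind of warm-start is to retrieve the stored soft prompt whose task is most similar to the incoming one. The sketch below is purely illustrative under that assumption (the bank entries, field names, and similarity metric are hypothetical, not the paper's actual mechanism):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def select_initial_prompt(bank, task_embedding):
    """Return the stored prompt whose task embedding is closest to the new task's."""
    best = max(bank, key=lambda e: cosine(e["task_embedding"], task_embedding))
    return best["prompt"]

# Toy bank with two previously tuned tasks (embeddings are placeholders).
bank = [
    {"task_embedding": [1.0, 0.0], "prompt": "soft-prompt-sentiment"},
    {"task_embedding": [0.0, 1.0], "prompt": "soft-prompt-qa"},
]
print(select_initial_prompt(bank, [0.9, 0.1]))  # → soft-prompt-sentiment
```

Starting tuning from a similar task's converged prompt, rather than from a random initialization, is what would shorten convergence time and thereby make SLO deadlines easier to hit.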
Problem

Research questions and friction points this paper is trying to address.

Prompt Tuning
Large Language Models
Service Level Objectives
Resource Management
Elastic System
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prompt Tuning
SLO-aware
Elastic System
Prompt Bank
Workload Scheduler
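The keywords above name an SLO-aware Workload Scheduler without describing its policy. One minimal way such a scheduler could trade resources against deadlines is to estimate remaining work and allocate the smallest GPU count whose projected finish time still meets the SLO. This sketch is a hypothetical illustration (the scaling model and parameters are assumptions, not the paper's algorithm):

```python
def gpus_needed(remaining_steps, step_time_1gpu, deadline_s,
                scaling=0.9, max_gpus=8):
    """Smallest GPU count whose estimated finish time meets the SLO deadline.

    Assumes sub-linear speedup: k GPUs run k * scaling times faster than one
    (illustrative). Returns None if the SLO cannot be met within max_gpus.
    """
    for k in range(1, max_gpus + 1):
        speedup = 1.0 if k == 1 else k * scaling
        est_finish = remaining_steps * step_time_1gpu / speedup
        if est_finish <= deadline_s:
            return k
    return None

print(gpus_needed(remaining_steps=1000, step_time_1gpu=0.5, deadline_s=200))  # → 3
```

Re-running this decision as jobs progress (elastically scaling allocations up when a deadline is at risk and down when there is slack) is the general pattern by which such a scheduler would reduce both SLO violations and idle-resource cost.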