Symbiosis: Multi-Adapter Inference and Fine-Tuning

📅 2025-07-03
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing PEFT service frameworks suffer from four key bottlenecks: (1) task-level model instance redundancy, leading to low GPU utilization; (2) lack of support for heterogeneous adapter co-deployment and independent resource management; (3) strict isolation between inference and fine-tuning resources, preventing shared base-model reuse; and (4) absence of privacy-preserving mechanisms for user-specific fine-tuning parameters. This paper introduces Symbiosis, the first service-oriented framework enabling collaborative multi-adapter inference and fine-tuning. Its core innovations are: (1) freezing and sharing the base model layers across tasks to enable cross-instance reuse; (2) introducing split execution, decoupling client-side adapters from the server-side base model to support mixed PEFT method deployment, fine-grained resource isolation, and end-to-end parameter privacy; and (3) full compatibility with Hugging Face Transformers without architectural modifications. Experiments on Llama2-13B demonstrate that Symbiosis achieves 4× higher adapter fine-tuning throughput than baseline approaches under identical GPU resources, significantly improving resource efficiency and service elasticity.

๐Ÿ“ Abstract
Parameter-efficient fine-tuning (PEFT) allows model builders to capture task-specific parameters in adapters, which are a fraction of the size of the original base model. The popularity of PEFT for fine-tuning has led to the creation of a large number of adapters for popular Large Language Models (LLMs). However, existing frameworks fall short in supporting inference or fine-tuning with multiple adapters in the following ways. 1) For fine-tuning, each job needs to deploy its dedicated base model instance, which results in excessive GPU memory consumption and poor GPU utilization. 2) While popular inference platforms can serve multiple PEFT adapters, they do not allow independent resource management or mixing of different PEFT methods. 3) They cannot share resources (such as a base model instance) between inference and fine-tuning jobs. 4) They do not provide privacy to users who may not wish to expose their fine-tuned parameters to service providers. In Symbiosis, we address the above problems by enabling as-a-service deployment of the base model. The base model layers can be shared across multiple inference or fine-tuning processes. Our split-execution technique decouples the execution of client-specific adapters and layers from the frozen base model layers, offering clients the flexibility to manage their resources, select their fine-tuning method, and achieve their performance goals. Our approach is transparent to models and works out-of-the-box for most models in the transformers library. Our evaluation on Llama2-13B shows that, compared to the baseline, Symbiosis can fine-tune 4× more adapters on the same set of GPUs in the same amount of time.
Problem

Research questions and friction points this paper is trying to address.

Efficiently share base model across multiple fine-tuning jobs
Support independent resource management for PEFT adapters
Enable privacy-preserving fine-tuning and inference deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shared base model for multi-adapter inference and fine-tuning
Split-execution technique decouples adapter and base layers
Enables fine-tuning 4× more adapters on the same GPUs
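The split-execution idea can be sketched for a single linear layer with a LoRA adapter: the shared, frozen base layer runs once in the server-side base-model process, while each client computes its private low-rank adapter path locally and merges the two outputs. A minimal NumPy sketch, where the function names and the in-process "server"/"client" split are illustrative assumptions rather than Symbiosis's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 16, 16, 4

# Server side: frozen base-model weight, shared by all clients.
W_base = rng.standard_normal((d_in, d_out))

def server_forward(x):
    """Frozen base layer, executed once in the shared base-model process."""
    return x @ W_base

# Client side: a private LoRA adapter (A, B) that never leaves the client.
A = rng.standard_normal((d_in, rank)) * 0.01
B = np.zeros((rank, d_out))  # standard LoRA init: B starts at zero

def client_forward(x, base_out):
    """Adapter path computed locally, then merged with the server's output."""
    return base_out + x @ A @ B

x = rng.standard_normal((2, d_in))
y_split = client_forward(x, server_forward(x))

# With B initialized to zero, the merged output equals the base output,
# matching standard LoRA initialization.
assert np.allclose(y_split, server_forward(x))
```

Because only activations cross the client/server boundary, the adapter weights (A, B) stay private to the client, and many clients with different adapters, or different PEFT methods entirely, can share the single frozen base layer.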
Saransh Gupta
IBM Research, Almaden
Memory and Storage Systems · Computer Architecture · Data-Intensive Applications
Umesh Deshpande
IBM Research, USA
Travis Janssen
IBM Research, USA
Swami Sundararaman
IBM Research, USA