Balcony: A Lightweight Approach to Dynamic Inference of Generative Language Models

📅 2025-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of balancing computational cost against latency constraints in large language model (LLM) deployment, this paper proposes Balcony, a depth-based dynamic inference framework. Balcony inserts lightweight, trainable Transformer exit layers at selected points of a frozen pretrained backbone (e.g., LLaMA3-8B) and uses a self-distillation loss to align the outputs of each exit submodel with those of the full model. It achieves fine-grained resource adaptation using only 0.2% of the original pretraining data and a negligible number of additional parameters. On multiple benchmarks, Balcony outperforms state-of-the-art methods, including Flextron and LayerSkip, delivering substantial inference speedups with negligible performance degradation. According to the authors, it is the first approach to simultaneously achieve high hardware efficiency, minimal accuracy loss, and strong deployment flexibility.
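
The mechanism is easy to sketch in code. The following PyTorch snippet is a minimal illustration of the idea, not the paper's implementation: the names (BalconyExit, BalconyModel, exit_points, exit_at) are invented here, and a real LLaMA3 layer would also take attention masks and rotary position inputs.

```python
import torch
import torch.nn as nn

class BalconyExit(nn.Module):
    """One lightweight trainable Transformer layer used as an exit."""
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        return self.layer(hidden)

class BalconyModel(nn.Module):
    def __init__(self, backbone_layers: nn.ModuleList, lm_head: nn.Module,
                 exit_points: list[int], d_model: int, n_heads: int):
        super().__init__()
        self.backbone = backbone_layers   # pretrained layers, kept frozen
        self.lm_head = lm_head            # pretrained output head, kept frozen
        for p in self.parameters():       # freezes backbone + head only;
            p.requires_grad_(False)       # exits are registered afterwards
        # Only the inserted exit layers are trainable.
        self.exits = nn.ModuleDict(
            {str(i): BalconyExit(d_model, n_heads) for i in exit_points})

    def forward(self, hidden: torch.Tensor, exit_at: int | None = None):
        for i, layer in enumerate(self.backbone):
            hidden = layer(hidden)
            if exit_at == i:
                # Route through the lightweight exit instead of the
                # remaining (deeper) backbone layers.
                return self.lm_head(self.exits[str(i)](hidden))
        return self.lm_head(hidden)  # full-depth path is untouched
```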

📝 Abstract
Deploying large language models (LLMs) in real-world applications is often hindered by strict computational and latency constraints. While dynamic inference offers the flexibility to adjust model behavior based on varying resource budgets, existing methods are frequently limited by hardware inefficiencies or performance degradation. In this paper, we introduce Balcony, a simple yet highly effective framework for depth-based dynamic inference. By freezing the pretrained LLM and inserting additional transformer layers at selected exit points, Balcony maintains the full model's performance while enabling real-time adaptation to different computational budgets. These additional layers are trained using a straightforward self-distillation loss, aligning the sub-model outputs with those of the full model. This approach requires significantly fewer training tokens and tunable parameters, drastically reducing computational costs compared to prior methods. When applied to the LLaMA3-8B model, using only 0.2% of the original pretraining data, Balcony achieves minimal performance degradation while enabling significant speedups. Remarkably, we show that Balcony outperforms state-of-the-art methods such as Flextron and LayerSkip as well as other leading compression techniques on multiple models and at various scales, across a variety of benchmarks.
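
The "straightforward self-distillation loss" can be read as matching each exit submodel's output distribution to that of the frozen full model. Below is a minimal sketch, reusing the hypothetical BalconyModel above; the use of KL divergence and a temperature is an assumption, since the abstract does not name the exact divergence.

```python
import torch
import torch.nn.functional as F

def self_distillation_loss(model, hidden, exit_points, temperature: float = 1.0):
    """Align each exit submodel (student) with the full model (teacher).

    `model` is a BalconyModel as sketched above; KL divergence and the
    temperature are illustrative assumptions, not the paper's exact loss.
    """
    with torch.no_grad():
        # Teacher pass: the frozen full-depth model (no exit taken).
        teacher_logits = model(hidden)
        teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)

    loss = hidden.new_zeros(())
    for i in exit_points:
        student_logits = model(hidden, exit_at=i)  # gradients reach only the exit
        loss = loss + F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            teacher_probs,
            reduction="batchmean",
        )
    return loss / len(exit_points)
```

Because the backbone and head stay frozen, each step updates only the small exit layers, which is consistent with the abstract's claim of drastically fewer training tokens and tunable parameters.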
Problem

Research questions and friction points this paper is trying to address.

Dynamic inference for LLMs under computational constraints
Reducing training tokens and parameters for efficiency
Maintaining model performance while enabling real-time adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Depth-based dynamic inference framework (a budget-to-exit sketch follows this list)
Freezes the pretrained LLM and adds lightweight Transformer exit layers
Self-distillation loss aligns exit outputs with the full model for minimal performance degradation
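
To make "real-time adaptation to different computational budgets" concrete, a deployment needs a rule mapping a latency budget to an exit depth. The sketch below is purely illustrative: the exit depths and per-token latencies are hypothetical and would be profiled per device.

```python
# Hypothetical profiled latencies (ms per token) for each exit depth;
# None denotes the full-depth model. Real numbers are device-specific.
LATENCY_MS = {8: 11.0, 16: 21.0, 24: 31.0, None: 40.0}

def pick_exit(budget_ms: float):
    """Return the deepest exit (best quality) that fits the latency budget."""
    feasible = [(lat, depth) for depth, lat in LATENCY_MS.items() if lat <= budget_ms]
    if not feasible:
        raise ValueError("no submodel fits the latency budget")
    return max(feasible, key=lambda t: t[0])[1]

# e.g. pick_exit(25.0) -> 16; pick_exit(50.0) -> None (run the full model)
```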