Scaling Laws Meet Model Architecture: Toward Inference-Efficient LLMs

📅 2025-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Rising inference costs and the accuracy-efficiency trade-off pose critical challenges for large language model (LLM) deployment. Method: We propose a conditional scaling law and a joint architecture search framework that integrates architectural priors—including hidden dimension size, MLP-to-attention ratio, and grouped-query attention (GQA)—into the Chinchilla scaling paradigm. This enables multi-variable scaling law modeling and efficient architecture evaluation under fixed training budget constraints. Contribution/Results: We uncover the synergistic impact of GQA and module-level ratios on both inference throughput and accuracy. Empirical evaluation across 200+ models demonstrates that our optimized architectures achieve up to 2.1% higher accuracy and 42% higher inference throughput than LLaMA-3.2 at identical training cost. Our work provides both theoretical foundations and practical pathways for efficient, scalable LLM design.

📝 Abstract
Scaling the number of parameters and the size of training data has proven to be an effective strategy for improving large language model (LLM) performance. Yet, as these models grow increasingly powerful and widely deployed, the cost of inference has become a pressing concern. Despite its importance, the trade-off between model accuracy and inference efficiency remains underexplored. In this work, we examine how three key architectural factors influence both inference cost and accuracy: hidden size, the allocation of parameters between MLP and attention (the MLP-to-attention ratio), and grouped-query attention (GQA). We introduce a conditional scaling law that augments the Chinchilla framework with architectural information, along with a search framework for identifying architectures that are simultaneously inference-efficient and accurate. To validate our approach, we train more than 200 models spanning 80M to 3B parameters and 8B to 100B training tokens, and fit the proposed conditional scaling law. Our results show that the conditional scaling law reliably predicts optimal architectural choices and that the resulting models outperform existing open-source baselines. Under the same training budget, optimized architectures achieve up to 2.1% higher accuracy and 42% greater inference throughput compared to LLaMA-3.2.
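To make the idea of a conditional scaling law concrete, here is a minimal sketch of fitting a Chinchilla-style loss curve extended with one architectural variable. The functional form, the quadratic log-penalty on the MLP-to-attention ratio `r`, and all coefficients are illustrative assumptions, not the parameterization used in the paper; the data is synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

# Chinchilla-style loss: L(N, D) = E + A/N^alpha + B/D^beta.
# Hypothetical conditional extension: add an architecture term that
# penalizes deviation of the MLP-to-attention ratio r from an optimum
# r_opt (the paper's actual functional form may differ).
def conditional_loss(X, E, A, alpha, B, beta, C, r_opt):
    N, D, r = X
    return E + A / N**alpha + B / D**beta + C * (np.log(r) - np.log(r_opt)) ** 2

rng = np.random.default_rng(0)
N = rng.uniform(8e7, 3e9, 200)    # parameter counts: 80M-3B, as in the paper
D = rng.uniform(8e9, 1e11, 200)   # training tokens: 8B-100B
r = rng.uniform(1.0, 6.0, 200)    # MLP-to-attention parameter ratio
true_params = (1.7, 400.0, 0.34, 410.0, 0.28, 0.05, 3.0)  # illustrative
L = conditional_loss((N, D, r), *true_params) + rng.normal(0, 0.01, 200)

popt, _ = curve_fit(
    conditional_loss, (N, D, r), L,
    p0=(2.0, 300.0, 0.3, 300.0, 0.3, 0.1, 2.0),
    bounds=([0, 0, 0.1, 0, 0.1, 0, 0.5], [5, 1e4, 1, 1e4, 1, 1, 10]),
    maxfev=20000,
)
print(f"fitted optimal MLP:attention ratio ~ {popt[-1]:.2f}")
```

Once fitted on real training runs, such a law lets one compare architectural choices without training each candidate to convergence.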
Problem

Research questions and friction points this paper is trying to address.

Optimizing model architecture for inference efficiency and accuracy
Analyzing architectural factors affecting inference cost and performance
Developing conditional scaling laws for efficient LLM design
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces conditional scaling law with architectural factors
Proposes search framework for inference-efficient architectures
Optimizes hidden size, MLP-attention ratio, and GQA
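The search framework described above can be sketched as a grid search that scores each candidate architecture under a fixed training budget, trading predicted loss against a throughput estimate. The scoring weights, the stand-in scaling-law coefficients, and the throughput proxy below are all hypothetical placeholders for the paper's fitted models.

```python
from itertools import product

# Stand-in for a fitted conditional scaling law (illustrative coefficients).
def predicted_loss(n_params, tokens, mlp_attn_ratio):
    return (1.7 + 400 / n_params**0.34 + 410 / tokens**0.28
            + 0.05 * (mlp_attn_ratio / 3.0 - 1.0) ** 2)

# Crude throughput proxy: more GQA sharing (larger group count) shrinks
# the per-token KV-cache cost, and a larger MLP share adds FLOPs per token.
def throughput_proxy(hidden, mlp_attn_ratio, gqa_groups):
    decode_cost = hidden / gqa_groups + hidden * (1 + mlp_attn_ratio)
    return 1e6 / decode_cost

budget_params, budget_tokens = 1e9, 2e10  # fixed training budget
candidates = product([1024, 2048, 4096],  # hidden size
                     [1.5, 3.0, 4.5],     # MLP-to-attention ratio
                     [1, 4, 8])           # GQA group count

# Score = predicted loss minus a throughput bonus; weight 0.1 is arbitrary.
best = min(candidates,
           key=lambda c: predicted_loss(budget_params, budget_tokens, c[1])
                         - 0.1 * throughput_proxy(*c))
print(best)  # → (1024, 1.5, 8)
```

In practice the accuracy/throughput weighting would be set by deployment constraints, and each term would come from the fitted scaling law and measured hardware profiles rather than closed-form proxies.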