Advancing Model Refinement: Muon-Optimized Distillation and Quantization for LLM Deployment

📅 2026-01-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of deploying large language models (LLMs) on edge devices, where high computational demands, memory consumption, and energy costs limit practicality, and where balancing compression ratio against task performance remains difficult. The authors propose an integrated framework combining GPTQ quantization, low-rank adaptation (LoRA), and task-oriented data distillation. Notably, they introduce the Muon optimizer into the quantization-aware fine-tuning pipeline for the first time, complemented by Bayesian hyperparameter optimization and KL divergence-driven knowledge distillation to enable joint optimization. The approach achieves a 2x reduction in memory footprint (e.g., from 6 GB to 3 GB) while preserving accuracy, significantly outperforming standard GPTQ baselines on established LLM benchmarks and mitigating quantization-induced performance degradation.
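
To make the KL divergence-driven distillation step concrete, here is a minimal PyTorch sketch of a blended distillation objective; the function name, `temperature`, `alpha`, and the flattened logit shapes are illustrative assumptions, not code from the paper.

```python
# Hypothetical sketch of a KL-divergence distillation objective; all names
# and hyperparameter values are illustrative, not taken from the paper.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend soft KL(teacher || student) with the hard-label loss.

    Logits are assumed flattened to (num_tokens, vocab_size).
    """
    # Soften both distributions with the temperature before comparing them.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # batchmean matches the mathematical definition of KL divergence; the
    # T^2 factor keeps gradient magnitudes comparable across temperatures.
    kl = F.kl_div(log_p_student, p_teacher,
                  reduction="batchmean") * temperature ** 2
    # Standard cross-entropy against the ground-truth next-token labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kl + (1.0 - alpha) * ce
```

In a pipeline like the one described, the teacher would be the full-precision fine-tuned model and the student the quantized one, with `alpha` and `temperature` natural targets for the Bayesian hyperparameter search.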

📝 Abstract
Large Language Models (LLMs) enable advanced natural language processing but face deployment challenges on resource-constrained edge devices due to high computational, memory, and energy demands. Optimizing these models requires addressing three key challenges: acquiring task-specific data, fine-tuning for performance, and compressing models to accelerate inference while reducing resource demands. We propose an integrated framework combining GPTQ-based quantization, low-rank adaptation (LoRA), and a specialized data distillation process to significantly reduce model size and complexity while preserving or enhancing task-specific performance. By leveraging data distillation, knowledge distillation via Kullback-Leibler divergence, Bayesian hyperparameter optimization, and the Muon optimizer, our pipeline achieves up to 2x memory compression (e.g., reducing a 6 GB model to 3 GB) and enables efficient inference for specialized tasks. Empirical results demonstrate superior performance on standard LLM benchmarks compared to GPTQ quantization alone, with the Muon optimizer notably enhancing fine-tuned models' resistance to accuracy decay during quantization.
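
The Muon optimizer named above orthogonalizes each momentum-accumulated gradient matrix before applying it. Below is a minimal sketch of that update based on the public Muon reference description (momentum accumulation followed by a Newton-Schulz iteration); the function names and default hyperparameters are assumptions, and the paper's exact integration into quantization-aware fine-tuning may differ.

```python
# Minimal Muon-style update for a single 2-D weight matrix. Based on the
# public Muon description, not the paper's implementation; hyperparameters
# are placeholders.
import torch

def newton_schulz_orthogonalize(G, steps=5, eps=1e-7):
    """Approximately map G onto the nearest semi-orthogonal matrix."""
    a, b, c = 3.4445, -4.7750, 2.0315   # quintic iteration coefficients
    X = G / (G.norm() + eps)            # normalize so the iteration converges
    transposed = X.shape[0] > X.shape[1]
    if transposed:                      # keep the Gram matrix X @ X.T small
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X.T if transposed else X

@torch.no_grad()
def muon_step(weight, grad, momentum_buf, lr=0.02, beta=0.95):
    """One simplified (non-Nesterov) Muon step for a weight matrix.

    Non-matrix parameters (embeddings, norms) typically fall back to an
    AdamW-style update in full Muon implementations.
    """
    momentum_buf.mul_(beta).add_(grad)   # accumulate momentum in place
    update = newton_schulz_orthogonalize(momentum_buf)
    weight.add_(update, alpha=-lr)       # step along the orthogonalized direction
```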
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
model compression
edge deployment
quantization
resource constraints
Innovation

Methods, ideas, or system contributions that make the work stand out (an illustrative GPTQ + LoRA loading sketch follows this list).

Muon optimizer
GPTQ quantization
knowledge distillation
low-rank adaptation
model compression
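
As referenced above, here is a hedged sketch of wiring the GPTQ and LoRA pieces together using the Hugging Face `transformers` and `peft` libraries; the base model id, bit width, and LoRA hyperparameters are placeholders rather than values reported in the paper.

```python
# Hedged sketch: GPTQ 4-bit quantization plus LoRA adapters via Hugging Face
# `transformers` and `peft`. Model id and hyperparameters are placeholders,
# not values from the paper; on-the-fly GPTQ calibration additionally
# requires the `optimum` and `auto-gptq` backends.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig
from peft import LoraConfig, get_peft_model

model_id = "facebook/opt-1.3b"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Calibrate 4-bit GPTQ against a generic corpus; the paper reports roughly
# 2x memory reduction (e.g., 6 GB -> 3 GB) from quantization.
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=gptq_config, device_map="auto"
)

# Attach low-rank adapters so fine-tuning updates only a small parameter set.
lora_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

Task-oriented data distillation and the Muon-driven fine-tuning loop would then operate on this adapter-augmented quantized model.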
Jacob Sander
Intelligent Systems and Robotics (ISR), University of West Florida
Brian Jalaian
bjalaian@uwf.edu
Deep Learning · Large Language Models · Agentic AI · Trustworthy AI · Optimization
Venkat R. Dasari
DEVCOM Army Research Laboratory (ARL)