Distillation Quantification for Large Language Models

📅 2025-01-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the critical issue of model homogenization in large language model (LLM) knowledge distillation, which undermines distilled models' generalization capability and identity uniqueness. The authors propose the first systematic framework for quantifying "distillation degree": a metric capturing the extent to which distilled models lose their intrinsic identity and converge toward uniform behavior. Methodologically, the framework combines identity-cognition contradiction detection with multi-granularity cross-model response-similarity analysis, jointly modeling identity conflicts and response homogenization. Empirical results show that most well-known closed-source and open-source LLMs exhibit high distillation degrees, with Claude, Doubao, and Gemini as notable exceptions, and that base models are more susceptible to homogenization than aligned models. All code and data are publicly released, establishing a reproducible evaluation benchmark for controllable knowledge transfer and model-diversity preservation.
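The identity-cognition check described above can be sketched as a keyword-based contradiction detector: flag a model whose self-description names a different model family than its own. This is a minimal illustrative sketch, not the paper's actual pipeline; the `IDENTITY_KEYWORDS` map and `identity_contradiction` function are hypothetical names and the keyword lists are assumptions.

```python
# Hedged sketch: detect identity-cognition contradictions by checking whether
# a model's self-description mentions a *different* vendor/model family.
# Keyword map below is illustrative, not the paper's data.
IDENTITY_KEYWORDS = {
    "gpt-4": ["openai", "gpt"],
    "claude-3": ["anthropic", "claude"],
    "llama-3": ["meta", "llama"],
}

def identity_contradiction(model_name: str, self_description: str) -> bool:
    """Return True if the self-description names a different model family."""
    text = self_description.lower()
    own = set(IDENTITY_KEYWORDS.get(model_name, []))
    for family, keywords in IDENTITY_KEYWORDS.items():
        if family == model_name:
            continue
        # A foreign family's keyword appearing (and not shared with the
        # model's own keywords) suggests a distilled/confused identity.
        if any(kw in text for kw in keywords if kw not in own):
            return True
    return False
```

For example, a Llama-family model answering "I am ChatGPT, developed by OpenAI" would be flagged, while a GPT-family model giving the same answer would not.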

📝 Abstract
Model distillation is a technique for transferring knowledge from large language models (LLMs) to smaller ones, aiming to create resource-efficient yet high-performing models. However, excessive distillation can lead to homogenization, reducing diversity among models and impairing their ability to robustly handle complex or novel tasks. These limitations underscore the need to systematically quantify the distillation process and its impact. In this work, we propose a framework to evaluate and quantify model distillation. Our method addresses two key aspects: (1) identifying identity-cognition contradictions to assess discrepancies in how models perceive and represent identity-related information, and (2) analyzing multi-granularity response similarities across models to measure the extent of homogenization. Experimental results demonstrate two key insights: (1) well-known closed-source and open-source LLMs usually exhibit high distillation degrees, except for Claude, Doubao, and Gemini; (2) base LLMs show higher distillation degrees than aligned LLMs. By offering a systematic approach to improve the transparency of LLM data distillation, we call for more independent development of LLMs and more transparent technical reports to improve LLMs' robustness and safety. The code and data are available at https://github.com/Aegis1863/LLMs-Distillation-Quantification.
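The second aspect, multi-granularity response similarity, can be sketched as follows: embed each model's responses to a shared prompt set and average pairwise cosine similarity as a homogenization score. This is an assumption about the general shape of such a measurement, not the paper's exact method; the function names are hypothetical, and a real pipeline would use a sentence-embedding model to produce the vectors.

```python
# Hedged sketch: score homogenization between two models as the mean cosine
# similarity of their response embeddings on a shared prompt set.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def homogenization_score(embeddings_a, embeddings_b):
    """Mean pairwise cosine similarity over aligned response embeddings.

    embeddings_a[i] and embeddings_b[i] are the two models' embedded
    responses to the same prompt i; a score near 1.0 suggests the models
    answer near-identically (high homogenization).
    """
    sims = [cosine(ea, eb) for ea, eb in zip(embeddings_a, embeddings_b)]
    return sum(sims) / len(sims)
```

Comparing a model against several peers at multiple granularities (token, sentence, full response) and aggregating such scores would give one component of a distillation-degree estimate.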
Problem

Research questions and friction points this paper is trying to address.

Knowledge Distillation
Language Models
Efficiency and Uniqueness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Efficiency-Uniqueness Balance
Identity Information Consistency
Model Response Similarity
👥 Authors
Sunbowen Lee
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
Junting Zhou
Peking University
Large Language Model · AI for Science · Bioinformatics
Chang Ao
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
Kaige Li
Leibowitz AI
Xinrun Du
Multimodal Art Projection Research Community, 01.ai
LLM
Sirui He
Leibowitz AI
Jiaheng Liu
Min Yang
Bytedance
Vision Language Model · Computer Vision · Video Understanding
Zhoufutu Wen
ByteDance SEED
LLM Evaluation
Shiwen Ni
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences