Flatter Tokens are More Valuable for Speculative Draft Model Training

πŸ“… 2026-01-26
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the inefficiency in Speculative Decoding training caused by data redundancy, noting that not all samples contribute equally to improving acceptance rates. The authors introduce a "flatness" metric to quantify how flat the target model's output distribution is, and demonstrate that samples associated with high-flatness tokens are more valuable for training. Building on this insight, they propose Sample-level Flatness-based Dataset Distillation (SFDD), a method that selectively retains high-value samples to significantly improve training efficiency. Experiments show that SFDD achieves over 2× training speedup using only 50% of the original data, while limiting the degradation in inference acceleration to within 4%. The approach has been integrated into the EAGLE framework, offering both theoretical grounding and practical utility.

πŸ“ Abstract
Speculative Decoding (SD) is a key technique for accelerating Large Language Model (LLM) inference, but it typically requires training a draft model on a large dataset. We approach this problem from a data-centric perspective, finding that not all training samples contribute equally to the SD acceptance rate. Specifically, our theoretical analysis and empirical validation reveal that tokens inducing flatter predictive distributions from the target model are more valuable than those yielding sharply peaked distributions. Based on this insight, we propose flatness, a new metric to quantify this property, and develop the Sample-level-flatness-based Dataset Distillation (SFDD) approach, which filters the training data to retain only the most valuable samples. Experiments on the EAGLE framework demonstrate that SFDD can achieve over 2$\times$ training speedup using only 50% of the data, while keeping the final model's inference speedup within 4% of the full-dataset baseline. This work introduces an effective, data-centric approach that substantially improves the training efficiency for Speculative Decoding. Our code is available at https://anonymous.4open.science/r/Flatness.
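The abstract does not spell out how flatness is computed, but a natural instantiation is the Shannon entropy of the target model's next-token distribution (higher entropy = flatter). The sketch below is a hypothetical illustration of SFDD-style filtering under that assumption: score each token by entropy, average to a sample-level flatness, and keep the flattest 50% of samples. The function names (`token_flatness`, `sample_flatness`, `sfdd_filter`) and the dict-based sample format are invented for this example, not taken from the paper's code.

```python
import math
from typing import List


def token_flatness(probs: List[float]) -> float:
    """Shannon entropy of the target model's next-token distribution.
    Entropy is an assumed stand-in for the paper's flatness metric:
    a uniform (flat) distribution maximizes it, a peaked one minimizes it."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)


def sample_flatness(per_token_probs: List[List[float]]) -> float:
    """Sample-level flatness: mean token flatness over the sequence."""
    return sum(token_flatness(p) for p in per_token_probs) / len(per_token_probs)


def sfdd_filter(samples: List[dict], keep_ratio: float = 0.5) -> List[dict]:
    """Hypothetical sketch of SFDD's selection step: rank samples by
    flatness and retain only the top keep_ratio fraction for training."""
    ranked = sorted(samples, key=lambda s: sample_flatness(s["probs"]), reverse=True)
    return ranked[: max(1, int(len(ranked) * keep_ratio))]


# Toy example: a sharply peaked sample vs. a flat one over a 4-token vocabulary.
sharp = {"id": "sharp", "probs": [[0.97, 0.01, 0.01, 0.01]]}
flat = {"id": "flat", "probs": [[0.25, 0.25, 0.25, 0.25]]}
kept = sfdd_filter([sharp, flat], keep_ratio=0.5)  # keeps only the flat sample
```

With `keep_ratio=0.5` this matches the setting in the experiments (training on half the data); in practice the per-token probabilities would come from a forward pass of the target model over the training corpus.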
Problem

Research questions and friction points this paper is trying to address.

Speculative Decoding
training efficiency
data-centric
acceptance rate
Large Language Model
Innovation

Methods, ideas, or system contributions that make the work stand out.

Speculative Decoding
Dataset Distillation
Flatness
Training Efficiency
Large Language Models
πŸ”Ž Similar Papers
No similar papers found.
Jiaming Fan
Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education; Southeast University
Daming Cao
Nanjing University of Information Science and Technology
Information theory
Xiangzhong Luo
Nanyang Technological University
Jiale Fu
Southeast University
speculative decoding, LLM reasoning
Chonghan Liu
Qiyuan Tech
Xu Yang
Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education; Southeast University