AI Summary
This work addresses the inefficiency in Speculative Decoding training caused by data redundancy, noting that not all samples contribute equally to improving acceptance rates. The authors introduce, for the first time, a "flatness" metric that quantifies how flat the target model's output distribution is, and demonstrate that samples associated with high-flatness tokens are more valuable for training. Building on this insight, they propose Sample-level-flatness-based Dataset Distillation (SFDD), a method that selectively retains high-value samples to significantly enhance training efficiency. Experiments show that SFDD achieves over 2× training speedup using only 50% of the original data while limiting inference acceleration performance degradation to within 4%. The approach has been integrated into the EAGLE framework, offering both theoretical grounding and practical utility.
Abstract
Speculative Decoding (SD) is a key technique for accelerating Large Language Model (LLM) inference, but it typically requires training a draft model on a large dataset. We approach this problem from a data-centric perspective, finding that not all training samples contribute equally to the SD acceptance rate. Specifically, our theoretical analysis and empirical validation reveal that tokens inducing flatter predictive distributions from the target model are more valuable than those yielding sharply peaked distributions. Based on this insight, we propose flatness, a new metric to quantify this property, and develop the Sample-level-flatness-based Dataset Distillation (SFDD) approach, which filters the training data to retain only the most valuable samples. Experiments on the EAGLE framework demonstrate that SFDD can achieve over 2$\times$ training speedup using only 50% of the data, while keeping the final model's inference speedup within 4% of the full-dataset baseline. This work introduces an effective, data-centric approach that substantially improves the training efficiency for Speculative Decoding. Our code is available at https://anonymous.4open.science/r/Flatness.
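To make the filtering idea concrete, here is a minimal sketch of how a sample-level flatness filter could work. The abstract does not give the exact definition of the flatness metric, so this sketch assumes Shannon entropy of the target model's per-token predictive distribution as a flatness proxy, mean-pooling over tokens as the sample-level aggregate, and a 50% keep fraction; the function names and the `"dists"` field are hypothetical, not the authors' API.

```python
import math

def token_flatness(probs):
    """Flatness proxy for one predictive distribution: Shannon entropy.
    A uniform (flat) distribution scores high; a sharply peaked one scores low."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def sample_flatness(token_distributions):
    """Aggregate token-level flatness to a sample-level score (mean, assumed)."""
    return sum(token_flatness(p) for p in token_distributions) / len(token_distributions)

def sfdd_filter(samples, keep_fraction=0.5):
    """Keep the highest-flatness fraction of training samples.
    Each sample is a dict with a 'dists' list of per-token probability vectors."""
    scored = sorted(samples, key=lambda s: sample_flatness(s["dists"]), reverse=True)
    k = max(1, int(len(scored) * keep_fraction))
    return scored[:k]

# Toy usage: one sample with flat distributions, one with peaked ones.
samples = [
    {"id": "peaked", "dists": [[0.97, 0.01, 0.01, 0.01], [0.94, 0.02, 0.02, 0.02]]},
    {"id": "flat",   "dists": [[0.25, 0.25, 0.25, 0.25], [0.30, 0.25, 0.25, 0.20]]},
]
kept = sfdd_filter(samples, keep_fraction=0.5)
```

With a 50% keep fraction, only the flat-distribution sample survives the filter, mirroring the paper's claim that high-flatness samples carry more training value.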