Enhancing Token Filtering Efficiency in Large Language Model Training with Collider

📅 2025-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low efficiency of token filtering in large language model training—particularly the insufficient sparsity in later layers and the cost of large sparse matrix multiplications—this work proposes a full-layer activation sparsification framework. It introduces a unified token importance evaluation and pruning mechanism across all network layers, ensuring end-to-end sparsity, and designs an automatic transformation from sparse GEMM to dimension-reduced dense GEMM, replacing costly sparse computations with smaller dense ones. The method supports multi-layer joint sparsification, dynamic dimension compression, and framework-agnostic integration via a single line of code. Experiments show that at a 40% token filtering rate, backpropagation time is reduced by up to 35.1% and end-to-end training time by up to 22.0%. For TinyLlama trained on 8 GPUs, total training time drops from 4.7 to 3.5 days, while model utility improves by 16.3% relative to regular training.

Technology Category

Application Category

📝 Abstract
Token filtering has been proposed to enhance the utility of large language models (LLMs) by eliminating inconsequential tokens during training. While using fewer tokens should reduce computational workloads, existing studies have not succeeded in achieving higher efficiency. This is primarily due to the insufficient sparsity caused by filtering tokens only in the output layers, as well as inefficient sparse GEMM (General Matrix Multiplication) even when sparsity is sufficient. This paper presents Collider, a system unleashing the full efficiency of token filtering in LLM training. At its core, Collider filters activations of inconsequential tokens across all layers to maintain sparsity. Additionally, it features an automatic workflow that transforms sparse GEMM into dimension-reduced dense GEMM for optimized efficiency. Evaluations on three LLMs (TinyLlama-1.1B, Qwen2.5-1.5B, and Phi1.5-1.4B) demonstrate that Collider reduces backpropagation time by up to 35.1% and end-to-end training time by up to 22.0% when filtering 40% of tokens. Utility assessments of training TinyLlama on 15B tokens indicate that Collider sustains the utility advancements of token filtering, relatively improving model utility by 16.3% compared to regular training, while reducing training time from 4.7 days to 3.5 days on 8 GPUs. Collider is designed for easy integration into existing LLM training frameworks, allowing systems already using token filtering to accelerate training with just one line of code.
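The core idea behind the sparse-to-dense GEMM transformation can be illustrated with a minimal NumPy sketch. This is not Collider's actual implementation—all variable names and shapes here are hypothetical—but it shows why gathering the surviving token rows into a compact matrix and running a smaller dense GEMM yields the same result as multiplying the full row-sparse activation matrix, at roughly the filtered fraction fewer FLOPs:

```python
import numpy as np

# Hypothetical sketch (not Collider's code): dimension-reduced dense GEMM
# as a replacement for a row-sparse GEMM after token filtering.
rng = np.random.default_rng(0)
seq_len, d_model, d_ff = 10, 8, 16
weight = rng.standard_normal((d_model, d_ff))

activations = rng.standard_normal((seq_len, d_model))
# 40% of tokens filtered: their activation rows contribute nothing.
keep = np.array([True, False, True, True, False,
                 True, False, True, True, False])
activations[~keep] = 0.0

# Naive path: a full-size GEMM over the row-sparse matrix wastes
# work on the zeroed rows.
full_out = activations @ weight

# Dimension-reduced path: gather kept rows, run a smaller dense GEMM
# over the compact matrix, then scatter results back into place.
compact_out = np.zeros_like(full_out)
compact_out[keep] = activations[keep] @ weight

assert np.allclose(full_out, compact_out)
```

Because GEMM cost scales with the number of rows, the compact multiply processes only 60% of the tokens here, mirroring the 40% filtering rate used in the paper's evaluation.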
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Computational Efficiency
Token Pruning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token Pruning
Automated Optimization
Efficient Training