Do We Need Distinct Representations for Every Speech Token? Unveiling and Exploiting Redundancy in Large Speech Language Models

πŸ“… 2026-04-08
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the excessive computational overhead in large speech language models (LSLMs) caused by high token sampling rates, which produce input sequences far longer than their semantic content requires. Through inter-layer oracle intervention analysis, the study reveals that while shallow layers retain fine-grained acoustic details, deeper layers exhibit highly structured redundancy. Leveraging this insight, the authors propose Affinity Pooling, a training-free method that dynamically merges tokens based on similarity, both at the input and within deep layers. This approach preserves semantic accuracy while significantly improving efficiency: it reduces prefill FLOPs by 27.48%, cuts memory consumption by approximately 1.7×, and speeds up time-to-first-token by about 1.1×.
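To make the core idea concrete, here is a minimal sketch of similarity-based token merging: adjacent token vectors whose cosine similarity exceeds a threshold are mean-pooled into one representation. The threshold value, the adjacent-pair merging rule, and the function name `affinity_pool` are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

def affinity_pool(tokens: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Merge adjacent token vectors whose cosine similarity exceeds
    `threshold`, mean-pooling each run of similar tokens into one vector.

    tokens: array of shape (seq_len, hidden_dim).
    Note: the greedy left-to-right merge and the 0.9 threshold are
    assumptions for illustration only.
    """
    merged = [tokens[0]]
    for t in tokens[1:]:
        prev = merged[-1]
        sim = np.dot(prev, t) / (np.linalg.norm(prev) * np.linalg.norm(t) + 1e-8)
        if sim > threshold:
            # Highly similar neighbour: pool it into the running token.
            merged[-1] = (prev + t) / 2.0
        else:
            # Dissimilar token: keep it as a distinct representation.
            merged.append(t)
    return np.stack(merged)
```

A sequence of near-duplicate vectors collapses to a single pooled token, shortening the sequence (and hence prefill cost) while dissimilar tokens stay distinct.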
πŸ“ Abstract
Large Speech Language Models (LSLMs) typically operate at high token rates (tokens/s) to ensure acoustic fidelity, yet this results in sequence lengths that far exceed the underlying semantic content, incurring prohibitive inference costs. In this paper, we empirically revisit the necessity of such granular token-level processing. Through layer-wise oracle interventions, we unveil a structured redundancy hierarchy: while shallow layers encode essential acoustic details, deep layers exhibit extreme redundancy, allowing for aggressive compression. Motivated by these findings, we introduce Affinity Pooling, a training-free, similarity-based token merging mechanism. By strategically applying this method at both input and deep layers, we effectively compress speech representations without compromising semantic information. Extensive evaluations across three tasks demonstrate that our approach reduces prefilling FLOPs by 27.48% while maintaining competitive accuracy. Practical deployment further confirms significant efficiency gains, yielding up to ~1.7× memory savings and ~1.1× faster time-to-first-token on long utterances. Our results challenge the necessity of fully distinct token representations, providing new perspectives on LSLM efficiency.
Problem

Research questions and friction points this paper is trying to address.

Large Speech Language Models
token redundancy
inference efficiency
speech representation
sequence length
Innovation

Methods, ideas, or system contributions that make the work stand out.

Affinity Pooling
token redundancy
speech language models
sequence compression
training-free efficiency
πŸ”Ž Similar Papers
No similar papers found.
Bajian Xiang
Beike Inc., Beijing, China
Tingwei Guo
Beike Inc., Beijing, China
Xuan Chen
Purdue University
Yang Han
Beike Inc., Beijing, China