SpeechMapper: Speech-to-text Embedding Projector for LLMs

📅 2026-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high computational cost, susceptibility to overfitting, and limited generalization of existing speech large language models (LLMs), which typically rely on expensive end-to-end training. The authors propose an efficient and scalable method for mapping speech embeddings into LLM-compatible representations through a novel paradigm of task-agnostic pretraining followed by lightweight fine-tuning. Specifically, a speech-to-text embedding projector is first pretrained without any involvement of the target LLM, enabling training on inexpensive hardware, and is subsequently adapted to the target LLM via only about 1,000 steps of instruction tuning. This approach drastically reduces computational requirements while supporting both task-agnostic and task-specific configurations. In evaluations on speech translation and spoken question answering, the task-agnostic variant matches the performance of the best IWSLT25 model, while the task-specific variant surpasses existing methods while using less data and compute.

📝 Abstract
Current speech LLMs bridge speech foundation models to LLMs using projection layers, training all of these components on speech instruction data. This strategy is computationally intensive and susceptible to task and prompt overfitting. We present SpeechMapper, a cost-efficient speech-to-LLM-embedding training approach that mitigates overfitting, enabling more robust and generalizable models. Our model is first pretrained without the LLM on inexpensive hardware, and then efficiently attached to the target LLM via a brief 1K-step instruction tuning (IT) stage. Through experiments on speech translation and spoken question answering, we demonstrate the versatility of SpeechMapper's pretrained block, presenting results for both task-agnostic IT, an ASR-based adaptation strategy that does not train on the target task, and task-specific IT. In task-agnostic settings, SpeechMapper rivals the best instruction-following speech LLM from IWSLT25, despite never being trained on these tasks, while in task-specific settings, it outperforms this model across many datasets, despite requiring less data and compute. Overall, SpeechMapper offers a practical and scalable approach for efficient, generalizable speech-LLM integration without large-scale IT.
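The core idea of stage-one pretraining, fitting a projector that maps speech-encoder outputs onto text-token embeddings without ever running the target LLM, can be illustrated with a toy sketch. This is not the authors' implementation: the dimensions, the synthetic data, and the closed-form least-squares fit (standing in for gradient-based training of the projection layer) are all illustrative assumptions.

```python
import numpy as np

# Hypothetical dimensions; the paper does not specify these, chosen for illustration.
SPEECH_DIM, LLM_DIM, N_FRAMES = 64, 32, 500

rng = np.random.default_rng(0)

# Stand-ins for speech-encoder outputs and the paired text-token embeddings
# from the target LLM's embedding table (synthetic, linearly related + noise).
speech_emb = rng.normal(size=(N_FRAMES, SPEECH_DIM))
true_map = rng.normal(size=(SPEECH_DIM, LLM_DIM)) / np.sqrt(SPEECH_DIM)
text_emb = speech_emb @ true_map + 0.01 * rng.normal(size=(N_FRAMES, LLM_DIM))

# Stage 1 (pretraining without the LLM): fit the projector so that projected
# speech embeddings land near the text-embedding targets. A least-squares
# solve stands in for the actual gradient-based pretraining.
W, *_ = np.linalg.lstsq(speech_emb, text_emb, rcond=None)

projected = speech_emb @ W          # (N_FRAMES, LLM_DIM): LLM-compatible inputs
mse = float(np.mean((projected - text_emb) ** 2))
print(f"projection MSE: {mse:.4f}")
```

Stage two of the method (the brief ~1K-step instruction tuning that attaches the pretrained projector to the target LLM) is omitted here, since it requires the LLM itself; the sketch only shows why the projector can be trained cheaply in isolation.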
Problem

Research questions and friction points this paper is trying to address.

speech LLMs
overfitting
projection layers
instruction tuning
generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Speech-to-LLM embedding
projection layer
instruction tuning
task-agnostic adaptation
efficient training