🤖 AI Summary
This work addresses the low data efficiency of training large speech foundation models, tackling two core challenges: the representation gap between the speech and text modalities, and their inherent sequence-length mismatch. The authors propose a lightweight cross-modal alignment architecture that integrates a learnable cross-modal projection, dynamic sequence compression, and parameter-efficient fine-tuning. The method surpasses Qwen2-Audio on speech translation and the AIR-Bench speech tasks while using only one-fiftieth (2%) of its labeled training data, and it preserves strong conversational capabilities. Extensive evaluation across multiple speech understanding and translation tasks validates its effectiveness, and the implementation is publicly available.
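The two alignment components can be illustrated with a minimal sketch. This is not the paper's implementation: the module name, dimensions, and the fixed-ratio convolutional compressor are all assumptions for illustration (the paper's sequence compression is dynamic, i.e., content-dependent), but the sketch shows the two roles named above: a learnable projection that bridges the representation gap, followed by a compressor that shortens the speech sequence toward text length.

```python
import torch
import torch.nn as nn

class SpeechTextAligner(nn.Module):
    """Hypothetical sketch of a cross-modal alignment adapter:
    (1) a learnable linear projection maps speech-encoder states into the
        LLM's text embedding space (bridging the representation gap);
    (2) a strided 1-D convolution shortens the frame sequence by a fixed
        ratio (a stand-in for the paper's dynamic compression)."""

    def __init__(self, speech_dim=1280, text_dim=4096, ratio=4):
        super().__init__()
        self.project = nn.Linear(speech_dim, text_dim)
        self.compress = nn.Conv1d(text_dim, text_dim,
                                  kernel_size=ratio, stride=ratio)

    def forward(self, speech_feats):
        # speech_feats: (batch, frames, speech_dim)
        x = self.project(speech_feats)         # (batch, frames, text_dim)
        x = self.compress(x.transpose(1, 2))   # (batch, text_dim, frames // ratio)
        return x.transpose(1, 2)               # (batch, frames // ratio, text_dim)

aligner = SpeechTextAligner()
out = aligner(torch.randn(2, 100, 1280))
print(out.shape)  # torch.Size([2, 25, 4096])
```

In a parameter-efficient setup, only this adapter (plus, e.g., LoRA layers inside the LLM) would be trained, leaving the speech encoder and LLM backbone frozen.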
📝 Abstract
Existing end-to-end speech large language models (LLMs) usually rely on large-scale annotated data for training, whereas data-efficient training has not been explored in depth. We focus on two fundamental mismatches between speech and text: the representation space gap and the sequence length inconsistency. We propose Soundwave, which uses an efficient training strategy and a novel architecture to address these issues. Results show that Soundwave outperforms the advanced Qwen2-Audio on speech translation and AIR-Bench speech tasks using only one-fiftieth of the training data. Further analysis shows that Soundwave retains its intelligence during conversation. The project is available at https://github.com/FreedomIntelligence/Soundwave.