Speechless: Speech Instruction Training Without Speech for Low Resource Languages

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Low-resource languages face a dual bottleneck in speech command understanding: scarcity of high-quality spoken command data and limited availability of suitable text-to-speech (TTS) models. Method: This paper proposes a novel, TTS-free paradigm for speech command understanding. It aligns Whisper encoder embeddings with synthetically generated text commands in semantic space, enabling large language models to comprehend speech inputs via pure text-based instruction tuning—without synthesizing speech. The approach comprises three components: cross-modal semantic alignment using the Whisper encoder, text-instruction-supervised fine-tuning, and semantic-space knowledge distillation—forming a zero-TTS training framework. Contribution/Results: Experiments demonstrate substantial improvements in speech command understanding accuracy for low-resource languages—even without any real spoken command data. At inference, the model processes raw audio directly. Training efficiency increases by over 3× compared to TTS-dependent baselines. To our knowledge, this is the first work to bypass TTS entirely and achieve speech–text command alignment solely at the semantic level.
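The core idea above (projecting speech-encoder embeddings into the LLM's text semantic space so that text-only instruction tuning transfers to audio inputs) can be illustrated with a toy sketch. Note this is not the paper's actual implementation: random vectors stand in for Whisper encoder outputs and text embeddings, and the linear projector `W`, dimensions, and MSE objective are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for real features (shapes are illustrative only):
# Whisper encoder output: (batch, d_speech); LLM text embedding: (batch, d_text)
d_speech, d_text, batch = 64, 32, 16
speech_emb = rng.normal(size=(batch, d_speech))
text_emb = rng.normal(size=(batch, d_text))

# Linear projector mapping speech embeddings into the text semantic space.
W = rng.normal(scale=0.01, size=(d_speech, d_text))

def align_loss(W):
    """Mean-squared distance between projected speech and text embeddings."""
    diff = speech_emb @ W - text_emb
    return (diff ** 2).mean()

# Plain gradient descent on the alignment objective.
lr = 0.2
for step in range(1000):
    diff = speech_emb @ W - text_emb                       # (batch, d_text)
    grad = 2.0 * speech_emb.T @ diff / (batch * d_text)    # dL/dW
    W -= lr * grad

print(f"final alignment loss: {align_loss(W):.6f}")
```

Once such a projector is trained, the LLM can be instruction-tuned purely on text while accepting projected audio embeddings at inference, which is the TTS-free property the paper claims.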


📝 Abstract
The rapid growth of voice assistants powered by large language models (LLMs) has highlighted a need for speech instruction data to train these systems. Despite the abundance of speech recognition data, there is a notable scarcity of speech instruction data, which is essential for fine-tuning models to understand and execute spoken commands. Generating high-quality synthetic speech requires a good text-to-speech (TTS) model, which may not be available for low-resource languages. Our novel approach addresses this challenge by halting synthesis at the semantic representation level, bypassing the need for TTS. We achieve this by aligning synthetic semantic representations with the pre-trained Whisper encoder, enabling an LLM to be fine-tuned on text instructions while maintaining the ability to understand spoken instructions during inference. This simplified training process is a promising approach to building voice assistants for low-resource languages.
Problem

Research questions and friction points this paper is trying to address.

Addressing scarcity of speech instruction data for low-resource languages
Eliminating need for text-to-speech models in speech instruction training
Enabling LLMs to understand spoken commands without speech data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Halts synthesis at semantic representation level
Aligns synthetic semantics with Whisper encoder
Fine-tunes LLM on text instructions only
Authors
Alan Dao | AI Researcher | Artificial Intelligence
Dinh Bach Vu | Menlo Research
Huy Hoang Ha | Menlo Research, UGA | LLM, Multimodal model
Tuan Le Duc Anh | ex Moreh, Menlo Research, Viettel | LLM System, LLMOps
Shreyas Gopal | CCDS, Nanyang Technological University, Singapore
Yue Heng Yeo | CCDS, Nanyang Technological University, Singapore
Warren Keng Hoong Low | Menlo Research
Eng Siong Chng | CCDS, Nanyang Technological University, Singapore
Jia Qi Yip | Menlo Research | signal processing, speech separation, speaker verification