Instituto de Telecomunicações at IWSLT 2025: Aligning Small-Scale Speech and Language Models for Speech-to-Text Learning

📅 2025-06-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the feasibility of building a unified speech-to-text model under strict licensing constraints (exclusively CC-BY-licensed data) and model size limitations (<2B parameters). The authors propose a two-stage training paradigm: first, modality-aligned pretraining that jointly optimizes a continuous speech encoder and a lightweight language decoder end to end; second, instruction fine-tuning augmented with controllable synthetic data to improve generalization. To their knowledge, this is the first work to directly align small language models with continuous speech representations on the IWSLT short-track benchmark. The approach achieves strong performance across three tasks—automatic speech recognition, speech translation, and spoken question answering—demonstrating that high-quality data curation and architecture co-design significantly improve cross-task generalization in compact models. The method thus strikes a balance among computational efficiency, task performance, and licensing compliance.

📝 Abstract
This paper presents the IT-IST submission to the IWSLT 2025 Shared Task on Instruction Following Speech Processing. We submit results for the Short Track, i.e., speech recognition, translation, and spoken question answering. Our model is a unified speech-to-text model that integrates a pre-trained continuous speech encoder and text decoder through a first phase of modality alignment and a second phase of instruction fine-tuning. Crucially, we focus on small-scale language model backbones (<2B parameters) and restrict ourselves to high-quality, CC-BY-licensed data, supplemented with synthetic data generation to extend existing resources.
Problem

Research questions and friction points this paper is trying to address.

Aligning small-scale speech and language models
Unified speech-to-text model for multiple tasks
Using limited high-quality data with synthetic supplements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified speech-to-text model integration
Small-scale language model backbones
Synthetic data generation supplement
Giuseppe Attanasio
Postdoctoral Researcher, Instituto de Telecomunicações
AI, Fairness, Transparency, Safety
Sonal Sannigrahi
Instituto de Telecomunicações, Lisbon, Portugal; Instituto Superior Técnico, Universidade de Lisboa, Portugal
Ben Peters
Instituto de Telecomunicações, Lisbon, Portugal
André F.T. Martins
Instituto de Telecomunicações, Lisbon, Portugal; Instituto Superior Técnico, Universidade de Lisboa, Portugal; Unbabel, Lisbon, Portugal