🤖 AI Summary
This study investigates the feasibility of building a unified speech-to-text model under strict copyright constraints (exclusively CC-BY-licensed data) and model-size limitations (<2B parameters). We propose a two-stage training paradigm: first, modality-aligned pretraining that jointly optimizes a continuous speech encoder and a lightweight language decoder end to end; second, instruction fine-tuning augmented with controllable synthetic data to improve generalization. To our knowledge, this is the first work to directly align a small language model with continuous speech representations on the IWSLT short-track benchmark. Our approach achieves state-of-the-art performance across three tasks (automatic speech recognition, speech translation, and spoken question answering), demonstrating that high-quality data curation and architecture co-design substantially improve cross-task generalization in compact models. The method thus strikes a balance among computational efficiency, task performance, and licensing compliance.
📝 Abstract
This paper presents the IT-IST submission to the IWSLT 2025 Shared Task on Instruction Following Speech Processing. We submit results for the Short Track, i.e., automatic speech recognition, speech translation, and spoken question answering. Our model is a unified speech-to-text model that integrates a pre-trained continuous speech encoder and a text decoder through a first phase of modality alignment and a second phase of instruction fine-tuning. Crucially, we focus on small-scale language model backbones (<2B parameters) and restrict training to high-quality, CC-BY-licensed data, supplemented by synthetic data generation.