🤖 AI Summary
To address the trade-offs among efficiency, robustness, and multilingual coverage in multilingual automatic speech recognition (ASR) and speech-to-text translation (AST), this paper introduces Canary-1B-v2, trained with a two-stage pre-training and fine-tuning framework that uses dynamic data balancing. The architecture pairs a FastConformer encoder with a Transformer decoder, and non-speech audio is mixed into the training data to suppress hallucinations. Trained on 1.7 million hours of diverse multilingual speech, the model produces segment-level timestamps by combining the NeMo Forced Aligner (NFA) with an auxiliary CTC model. Experiments show Canary-1B-v2 outperforms Whisper-large-v3 on English ASR while running roughly 10× faster, and its multilingual ASR/AST performance is competitive with larger systems such as Seamless-M4T-v2-large. The authors also release Parakeet-TDT-0.6B-v3, a 600M-parameter successor model covering the same 25, primarily European, languages.
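The summary does not spell out how the dynamic data balancing works. One common realization of this idea is temperature-scaled sampling over per-language data sizes, which flattens the raw distribution so low-resource languages are seen more often. The sketch below illustrates that technique under this assumption; the function name, temperature value, and hour counts are hypothetical, not the authors' implementation.

```python
import numpy as np

def language_sampling_weights(hours_per_lang: dict[str, float],
                              tau: float = 0.5) -> dict[str, float]:
    """Temperature-scaled sampling weights: p_l proportional to n_l**tau.

    tau = 1.0 reproduces the raw data distribution; tau -> 0 flattens it
    toward uniform, upweighting low-resource languages.
    """
    langs = list(hours_per_lang)
    scaled = np.array([hours_per_lang[l] for l in langs]) ** tau
    probs = scaled / scaled.sum()
    return dict(zip(langs, probs))

# Example: English dominates the raw pool, but tau = 0.5 boosts the
# sampling probability of the lower-resource languages.
weights = language_sampling_weights({"en": 600_000.0, "de": 40_000.0, "mt": 500.0})
print(weights)
```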
📝 Abstract
This report introduces Canary-1B-v2, a fast, robust multilingual model for Automatic Speech Recognition (ASR) and Speech-to-Text Translation (AST). Built with a FastConformer encoder and Transformer decoder, it supports 25 languages, primarily European. The model was trained on a total of 1.7M hours of data, including Granary and NeMo ASR Set 3.0, with non-speech audio added to reduce hallucinations for ASR and AST. We describe its two-stage pre-training and fine-tuning process with dynamic data balancing, as well as experiments with an nGPT encoder. Results show nGPT scales well with massive data, while FastConformer excels after fine-tuning. For timestamps, Canary-1B-v2 uses the NeMo Forced Aligner (NFA) with an auxiliary CTC model, providing reliable segment-level timestamps for ASR and AST. Evaluations show Canary-1B-v2 outperforms Whisper-large-v3 on English ASR while being 10x faster, and delivers competitive multilingual ASR and AST performance against larger models like Seamless-M4T-v2-large and LLM-based systems. We also release Parakeet-TDT-0.6B-v3, a successor to v2, offering multilingual ASR across the same 25 languages with just 600M parameters.
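For readers who want to try the released checkpoints, transcription with the NVIDIA NeMo toolkit typically looks like the minimal sketch below. The checkpoint identifier "nvidia/canary-1b-v2" and the exact `transcribe()` arguments are assumptions based on NeMo's usual `ASRModel` interface; consult the model card for the definitive invocation and for how to request translation or timestamps.

```python
# A minimal sketch, assuming the NVIDIA NeMo toolkit is installed
# (pip install "nemo_toolkit[asr]") and that the checkpoint is published
# as "nvidia/canary-1b-v2"; check the model card for the exact name.
from nemo.collections.asr.models import ASRModel

model = ASRModel.from_pretrained("nvidia/canary-1b-v2")

# Transcribe a 16 kHz mono WAV file; transcribe() accepts a list of paths
# and returns one result per input file.
outputs = model.transcribe(["sample_en.wav"])
print(outputs[0])
```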