🤖 AI Summary
This work proposes Pantagruel, a family of self-supervised encoder models for French that learn representations for both text and speech under a shared architecture. Addressing the lack of a unified and efficient approach to multimodal representation learning in French, Pantagruel predicts contextualized targets in the feature space rather than modality-specific targets such as textual tokens or speech units, a novel contribution for the French language. The models are pretrained at scale on extensive text corpora, including Wikipedia, OSCAR, and CroissantLLM, alongside speech data from Multilingual LibriSpeech, LeBenchmark, and a newly curated 100,000-hour INA-100k speech corpus. Evaluated on standard benchmarks such as FLUE and LeBenchmark, Pantagruel matches or surpasses strong baselines like CamemBERT and FlauBERT, demonstrating its effectiveness and strong generalization across modalities.
📝 Abstract
We release Pantagruel models, a new family of self-supervised encoder models for French text and speech. Instead of predicting modality-tailored targets such as textual tokens or speech units, Pantagruel learns contextualized target representations in the feature space, allowing modality-specific encoders to capture linguistic and acoustic regularities more effectively. Separate models are pre-trained on large-scale French corpora, including Wikipedia, OSCAR, and CroissantLLM for text, together with Multilingual LibriSpeech, LeBenchmark, and INA-100k for speech. INA-100k is a newly introduced 100,000-hour corpus of French audio derived from the archives of the Institut National de l'Audiovisuel (INA), the national repository of French radio and television broadcasts, providing highly diverse audio data. We evaluate Pantagruel across a broad range of downstream tasks spanning both modalities, including tasks from standard French benchmarks such as FLUE and LeBenchmark. Across these tasks, Pantagruel models show competitive or superior performance compared to strong French baselines such as CamemBERT, FlauBERT, and LeBenchmark2.0, while maintaining a shared architecture that can seamlessly handle either speech or text inputs. These results confirm the effectiveness of feature-space self-supervised objectives for French representation learning and highlight Pantagruel as a robust foundation for multimodal speech-text understanding.
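The feature-space objective described above can be sketched very roughly as teacher-student training in the style of data2vec: a teacher encoder produces contextualized features from the full input, and the student regresses those features at masked positions. Everything below (the toy encoder, masking pattern, and EMA rate) is an illustrative assumption, not Pantagruel's actual implementation.

```python
import numpy as np

# Minimal sketch of a feature-space self-supervised objective
# (data2vec-style; hypothetical, not the Pantagruel architecture).

rng = np.random.default_rng(0)

def encode(x, w):
    """Toy 'contextual encoder': a linear map followed by mean-centering
    over time, so each output frame depends on the whole sequence."""
    h = x @ w
    return h - h.mean(axis=0, keepdims=True)

T, d_in, d_model = 10, 8, 16
x = rng.normal(size=(T, d_in))            # one sequence: speech frames or token embeddings
w_student = rng.normal(size=(d_in, d_model)) * 0.1
w_teacher = w_student.copy()              # teacher starts as a copy of the student

# 1) The teacher sees the full, unmasked input and produces contextualized targets.
targets = encode(x, w_teacher)

# 2) The student sees a masked view: some time steps are zeroed out.
mask = np.zeros(T, dtype=bool)
mask[::3] = True                          # deterministic mask for illustration
x_masked = x.copy()
x_masked[mask] = 0.0
preds = encode(x_masked, w_student)

# 3) Loss: regress the teacher's features at the masked positions only.
loss = float(np.mean((preds[mask] - targets[mask]) ** 2))

# 4) The teacher tracks the student via an exponential moving average.
tau = 0.999
w_teacher = tau * w_teacher + (1 - tau) * w_student

print(loss)
```

The same loop applies unchanged to either modality: only the front-end that turns raw speech or text into the input sequence `x` differs, which is what allows a shared architecture to handle both.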