Learning from Child-Directed Speech in Two-Language Scenarios: A French-English Case Study

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the overreliance on English in current language model research and presents a systematic exploration of bilingual (English–French) learning in developmentally plausible language models. We extend BabyBERTa to a bilingual setting and, under strictly size-matched data conditions, compare the effects of child-directed speech (~2.5M tokens) and multi-domain corpora (~10M tokens) on monolingual, bilingual, and cross-lingual pretraining. We also introduce new French evaluation resources, including French versions of QAMR and QASRL. Results show that child-directed input substantially improves monolingual grammatical judgment performance, whereas Wikipedia training consistently benefits semantic tasks. Bilingual pretraining yields significant gains on textual entailment, particularly for French, and similar patterns hold across multiple architectures, including BabyBERTa, RoBERTa, and LTG-BERT.

📝 Abstract
Research on developmentally plausible language models has largely focused on English, leaving open questions about multilingual settings. We present a systematic study of compact language models by extending BabyBERTa to English-French scenarios under strictly size-matched data conditions, covering monolingual, bilingual, and cross-lingual settings. Our design contrasts two types of training corpora: (i) child-directed speech (about 2.5M tokens), following BabyBERTa and related work, and (ii) multi-domain corpora (about 10M tokens), extending the BabyLM framework to French. To enable fair evaluation, we also introduce new resources, including French versions of QAMR and QASRL, as well as English and French multi-domain corpora. We evaluate the models on both syntactic and semantic tasks and compare them with models trained on Wikipedia-only data. The results reveal context-dependent effects: training on Wikipedia consistently benefits semantic tasks, whereas child-directed speech improves grammatical judgments in monolingual settings. Bilingual pretraining yields notable gains for textual entailment, with particularly strong improvements for French. Importantly, similar patterns emerge across BabyBERTa, RoBERTa, and LTG-BERT, suggesting consistent trends across architectures.
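To make the pretraining setup described in the abstract concrete, the sketch below shows one way to train a small RoBERTa-style masked language model on a size-matched English–French corpus mix with Hugging Face `transformers`. This is not the authors' released code: the file names (`childes_en.txt`, `childes_fr.txt`), the 2.5M-token-per-language budget, the stand-in `roberta-base` tokenizer, and the model and training hyperparameters are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): bilingual masked-LM
# pretraining on size-matched English and French corpora.
from datasets import load_dataset, concatenate_datasets
from transformers import (
    RobertaConfig, RobertaForMaskedLM, RobertaTokenizerFast,
    DataCollatorForLanguageModeling, Trainer, TrainingArguments,
)

TOKEN_BUDGET_PER_LANG = 2_500_000  # ~2.5M tokens per language (assumption)

# Hypothetical plain-text corpora, one utterance per line.
en = load_dataset("text", data_files={"train": "childes_en.txt"})["train"]
fr = load_dataset("text", data_files={"train": "childes_fr.txt"})["train"]

# Stand-in tokenizer; the paper's actual vocabulary may differ.
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")

def truncate_to_budget(ds, budget):
    """Keep lines until the cumulative whitespace token count reaches `budget`."""
    total, keep = 0, []
    for i, ex in enumerate(ds):
        total += len(ex["text"].split())
        keep.append(i)
        if total >= budget:
            break
    return ds.select(keep)

# Size-match the two languages, then interleave them by shuffling.
en = truncate_to_budget(en, TOKEN_BUDGET_PER_LANG)
fr = truncate_to_budget(fr, TOKEN_BUDGET_PER_LANG)
bilingual = concatenate_datasets([en, fr]).shuffle(seed=0)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = bilingual.map(tokenize, batched=True, remove_columns=["text"])

# Small RoBERTa-style configuration, roughly in the BabyBERTa size range (assumption).
config = RobertaConfig(
    vocab_size=tokenizer.vocab_size, hidden_size=256,
    num_hidden_layers=8, num_attention_heads=8, intermediate_size=1024,
)
model = RobertaForMaskedLM(config)

collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="bilingual-mlm",
                         per_device_train_batch_size=64,
                         num_train_epochs=10, learning_rate=1e-4)
Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=collator).train()
```

The monolingual baselines in the paper's comparison would correspond to running the same recipe on a single language's corpus; only the data mix changes, so any performance difference can be attributed to the bilingual input rather than the architecture or training budget.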
Problem

Research questions and friction points this paper is trying to address.

child-directed speech
multilingual language modeling
bilingual acquisition
developmentally plausible models
cross-lingual learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

child-directed speech
bilingual language modeling
BabyBERTa
cross-lingual evaluation
developmentally plausible models
Liel Binyamin
Faculty of Computer and Information Science, Institute for Applied AI Research, Data Science Research Center, Ben-Gurion University of the Negev
Elior Sulem
Ben-Gurion University of the Negev
Computational Linguistics · Natural Language Processing