🤖 AI Summary
It remains unclear whether child-directed language (CDL) universally enhances syntactic learning in language models across languages and modeling paradigms. Method: We systematically evaluate CDL's impact on syntactic competence across English, French, and German, under both masked and causal language modeling. We introduce FIT-CLAMS, a frequency-controlled syntactic evaluation framework that separates syntactic ability from lexical-frequency confounds and exposes frequency bias in standard syntactic benchmarks. Results: CDL-trained models underperform Wikipedia-based baselines on most established syntactic benchmarks; under FIT-CLAMS, their apparent gains largely disappear, indicating that the previously reported improvements reflect CDL's skewed lexical frequency distribution rather than stronger syntactic generalization. Our core contributions are: (1) evidence against the hypothesis that CDL universally benefits syntactic generalization; (2) a controlled, reproducible paradigm for syntactic evaluation; and (3) a methodological caution for computational models of language acquisition.
📝 Abstract
Seminal work by Huebner et al. (2021) showed that language models (LMs) trained on English Child-Directed Language (CDL) can reach syntactic abilities similar to those of LMs trained on much larger amounts of adult-directed written text, suggesting that CDL could provide more effective LM training material than the commonly used internet-crawled data. However, the generalizability of these results across languages, model types, and evaluation settings remains unclear. We test this by comparing models trained on CDL vs. Wikipedia across two LM objectives (masked and causal), three languages (English, French, German), and three syntactic minimal-pair benchmarks. Our results on these benchmarks show inconsistent benefits of CDL: in most cases, CDL-trained models are outperformed by their Wikipedia-trained counterparts. We then identify various shortcomings in previous benchmarks and introduce a novel testing methodology, FIT-CLAMS, which uses a frequency-controlled design to enable balanced comparisons across training corpora. Through minimal-pair evaluations and regression analysis, we show that training on CDL does not yield stronger generalizations for acquiring syntax, and we highlight the importance of controlling for frequency effects when evaluating syntactic ability.
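The minimal-pair evaluation paradigm the abstract refers to can be sketched as follows: a language model is scored as "correct" on a pair if it assigns higher probability to the grammatical sentence than to its minimally different ungrammatical twin. The tiny add-one-smoothed bigram model and toy agreement pair below are illustrative stand-ins, not the paper's actual models, training corpora, or benchmark items.

```python
import math
from collections import Counter

def train_bigram(corpus):
    """Collect unigram/bigram counts from a list of token lists."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent
        unigrams.update(toks)
        bigrams.update(zip(toks, toks[1:]))
    return unigrams, bigrams, len(unigrams)

def log_prob(sent, model):
    """Add-one-smoothed bigram log-probability of a sentence."""
    unigrams, bigrams, vocab = model
    toks = ["<s>"] + sent
    return sum(math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
               for a, b in zip(toks, toks[1:]))

# Toy training data exhibiting English subject-verb agreement.
corpus = [s.split() for s in [
    "the dog runs", "the dogs run", "a cat sleeps",
    "the cats sleep", "the dog sleeps", "the dogs sleep",
]]
model = train_bigram(corpus)

# A minimal pair: identical except for the agreement-bearing token.
good, bad = "the dog runs".split(), "the dog run".split()
accuracy_hit = log_prob(good, model) > log_prob(bad, model)
print(accuracy_hit)
```

Benchmark accuracy is then simply the fraction of pairs on which the grammatical variant wins; the frequency-controlled design of FIT-CLAMS additionally matches the lexical frequencies of the items across training corpora so that this accuracy reflects syntax rather than word familiarity.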