🤖 AI Summary
This study investigates the distribution of filler–gap dependencies in children's linguistic input to evaluate whether their acquisition relies on innate grammatical knowledge or is driven solely by that input. To this end, we develop the first automated system that integrates both constituency and dependency parsing to perform fine-grained annotation of three core types of filler–gap constructions and their extraction sites across 57 English CHILDES corpora. The system performs well against human-annotated data, enabling the first large-scale quantitative analysis of these structures in child-directed speech. Our findings reveal systematic asymmetries in input frequency and extraction position, offering empirical insight into the mechanisms of language acquisition and providing a foundation for future computational language model training.
📝 Abstract
Children's acquisition of filler-gap dependencies has been argued by some to depend on innate grammatical knowledge, while others suggest that the distributional evidence available in child-directed speech suffices. Unfortunately, the relevant input is difficult to quantify at scale and at fine granularity, leaving this question hard to resolve. We present a system that identifies three core filler-gap constructions in spoken English corpora -- matrix wh-questions, embedded wh-questions, and relative clauses -- and further identifies the extraction site (i.e., subject vs. object vs. adjunct). Our approach combines constituency and dependency parsing, leveraging their complementary strengths for construction classification and extraction-site identification. We validate the system on human-annotated data and find that it scores well across most categories. Applying the system to 57 English CHILDES corpora, we characterize children's filler-gap input and their production trajectories over the course of development, including construction-specific frequencies and extraction-site asymmetries. The resulting fine-grained labels enable future work in both acquisition research and computational modeling, which we demonstrate with a case study on filtered-corpus training of language models.
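To make the extraction-site distinction concrete, the following is a minimal sketch, not the authors' released system, of how a dependency parse alone can flag a matrix wh-question and approximate its gap site. It assumes spaCy with the `en_core_web_sm` model; the paper's approach additionally uses constituency parses for construction classification, which this sketch omits.

```python
# Minimal illustrative sketch (assumed setup, not the paper's system):
# classify the extraction site of a matrix wh-question from a dependency parse.
import spacy

nlp = spacy.load("en_core_web_sm")

WH_WORDS = {"who", "whom", "what", "which", "whose", "where", "when", "why", "how"}

def extraction_site(sentence: str):
    """Return 'subject', 'object', or 'adjunct' for a sentence-initial wh-word,
    or None if the sentence does not begin with one."""
    doc = nlp(sentence)
    first = doc[0]
    if first.text.lower() not in WH_WORDS:
        return None
    # The dependency relation of the fronted wh-word approximates the gap site.
    if first.dep_ in ("nsubj", "nsubjpass"):
        return "subject"
    if first.dep_ in ("dobj", "obj", "pobj", "attr"):
        return "object"
    return "adjunct"

if __name__ == "__main__":
    for s in ["Who ate the cookie?", "What did you eat?", "Why did she leave?"]:
        print(s, "->", extraction_site(s))
```

On these examples the heuristic yields subject, object, and adjunct respectively; embedded questions and relative clauses would require inspecting the clause structure (e.g., via the constituency parse) rather than only the first token.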