🤖 AI Summary
This study addresses the limited interpretability and poor cross-lingual generalizability of grammar induction methods that rely on annotated corpora. We propose the first agentic large language model (LLM) framework tailored for Universal Dependencies (UD) treebanks. Our approach integrates natural language understanding, programmatic code generation, and data-driven reasoning to couple LLMs’ symbolic reasoning capabilities with structured syntactic trees, enabling end-to-end, traceable, multilingual grammatical feature extraction. Evaluated on 13 core word-order typological features across 170+ languages, our system performs well on dominant-order accuracy, corpus-coverage completeness, and distributional fidelity. To our knowledge, this is the first LLM-based framework that simultaneously delivers interpretability, scalability, and robust cross-lingual syntactic analysis.
📝 Abstract
Empirical grammar research has become increasingly data-driven, but the systematic analysis of annotated corpora still requires substantial methodological and technical effort. We explore how agentic large language models (LLMs) can streamline this process by reasoning over annotated corpora and producing interpretable, data-grounded answers to linguistic questions. We introduce an agentic framework for corpus-grounded grammatical analysis that integrates natural-language task interpretation, code generation, and data-driven reasoning. As a proof of concept, we apply it to Universal Dependencies (UD) corpora, testing it on multilingual grammatical tasks inspired by the World Atlas of Language Structures (WALS). The evaluation spans 13 word-order features and over 170 languages, assessing system performance across three complementary dimensions (dominant-order accuracy, order-coverage completeness, and distributional fidelity) that reflect how well the system generalizes, identifies, and quantifies word-order variation. The results demonstrate the feasibility of combining LLM reasoning with structured linguistic data, offering a first step toward interpretable, scalable automation of corpus-based grammatical inquiry.
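Concretely, the kind of code the agent would generate reduces a WALS-style question (e.g., does an adjective precede or follow the noun it modifies?) to counts over UD dependency trees. A minimal sketch of that counting step, assuming plain CoNLL-U input; the function name and the tiny sample sentence are illustrative, not the authors' implementation:

```python
from collections import Counter

def order_counts(conllu_text, deprel="amod"):
    """Count dependent-before-head vs. head-before-dependent occurrences
    of one dependency relation, as a proxy for a word-order feature."""
    counts = Counter()
    for sent in conllu_text.strip().split("\n\n"):
        for line in sent.split("\n"):
            if not line or line.startswith("#"):
                continue  # skip comment/metadata lines
            cols = line.split("\t")
            # skip multiword-token and empty-node rows (non-integer IDs)
            if len(cols) != 10 or not cols[0].isdigit():
                continue
            tok_id, head, rel = int(cols[0]), cols[6], cols[7]
            if rel == deprel and head.isdigit():
                order = "dep-head" if tok_id < int(head) else "head-dep"
                counts[order] += 1
    return counts

# Tiny hand-made CoNLL-U fragment: "the red car" (amod: red -> car)
sample = """\
# text = the red car
1\tthe\tthe\tDET\t_\t_\t3\tdet\t_\t_
2\tred\tred\tADJ\t_\t_\t3\tamod\t_\t_
3\tcar\tcar\tNOUN\t_\t_\t0\troot\t_\t_
"""
print(order_counts(sample))  # adjective precedes noun: 1 dep-head case
```

The dominant order is then the most frequent label, coverage is the set of attested orders, and distributional fidelity compares the relative frequencies against reference proportions.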