Understanding Syntactic Generalization in Structure-inducing Language Models

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically evaluates the syntactic generalization capabilities of structure-inducing language models (SiLMs), addressing gaps in existing work regarding the systematicity of evaluation frameworks and cross-model comparability. We uniformly assess three representative SiLMs—Structformer, UDGN, and GPST—on both real-world corpora and synthetic bracketed expressions, measuring syntactic representation quality, grammaticality judgment accuracy, and training dynamics. Results show that GPST exhibits the most consistent syntactic generalization across tasks, particularly outperforming the other models on long-distance dependencies. Moreover, small models trained on controlled synthetic data efficiently expose fundamental syntactic competence, enabling a lightweight, reproducible evaluation paradigm. This work provides empirical guidance for SiLM architecture selection and establishes a principled methodology for assessing syntactic generalization.

📝 Abstract
Structure-inducing Language Models (SiLMs) are trained on a self-supervised language modeling task, and induce a hierarchical sentence representation as a byproduct when processing an input. A wide variety of SiLMs have been proposed. However, these have typically been evaluated on a relatively small scale, and evaluation of these models has systematic gaps and lacks comparability. In this work, we study three different SiLM architectures using both natural language (English) corpora and synthetic bracketing expressions: Structformer (Shen et al., 2021), UDGN (Shen et al., 2022), and GPST (Hu et al., 2024). We compare them with respect to (i) properties of the induced syntactic representations, (ii) performance on grammaticality judgment tasks, and (iii) training dynamics. We find that none of the three architectures dominates across all evaluation metrics. However, there are significant differences, in particular with respect to the induced syntactic representations. The Generative Pretrained Structured Transformer (GPST; Hu et al. 2024) performs most consistently across evaluation settings, and outperforms the other models on long-distance dependencies in bracketing expressions. Furthermore, our study shows that small models trained on large amounts of synthetic data provide a useful testbed for evaluating basic model properties.
Problem

Research questions and friction points this paper is trying to address.

Evaluating syntactic generalization in structure-inducing language models
Comparing three architectures on induced representations and grammaticality judgments
Assessing model performance on long-distance dependencies and training dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluated three SiLM architectures systematically
Used synthetic bracketing expressions for testing
GPST model excelled in long-distance dependencies
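The synthetic testbed above relies on bracketing expressions, for which grammaticality is fully determined by well-nestedness. As a hedged illustration (the paper's exact generation procedure is not given here; this is an assumed minimal setup), one could generate grammatical/ungrammatical pairs for grammaticality judgment like this:

```python
import random

def dyck_sequence(n_pairs, rng, pairs=("()", "[]")):
    """Generate a random well-nested bracket sequence with n_pairs bracket pairs."""
    out, stack = [], []
    opens = n_pairs
    while opens > 0 or stack:
        # Open a new bracket if any remain and (stack is empty or a coin flip says so).
        if opens > 0 and (not stack or rng.random() < 0.5):
            p = rng.choice(pairs)
            out.append(p[0])
            stack.append(p[1])  # remember which closer must eventually match
            opens -= 1
        else:
            out.append(stack.pop())
    return "".join(out)

def corrupt(seq, rng):
    """Make a sequence ungrammatical by swapping one closer for the wrong type."""
    swap = {")": "]", "]": ")"}
    idxs = [i for i, c in enumerate(seq) if c in swap]
    i = rng.choice(idxs)
    return seq[:i] + swap[seq[i]] + seq[i + 1:]

rng = random.Random(0)
good = dyck_sequence(8, rng)   # grammatical: every closer matches its opener
bad = corrupt(good, rng)       # ungrammatical: one type mismatch
```

A minimal pair like `good`/`bad` differs in a single token, so a model's preference between the two probes exactly the nesting (long-distance dependency) behavior the evaluation targets.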