Dependency Parsing is More Parameter-Efficient with Normalization

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
In dependency parsing, the biaffine scoring mechanism suffers from parameter redundancy, inefficient training, and poor generalization due to the absence of explicit normalization. This paper provides the first theoretical and empirical demonstration that explicitly normalizing biaffine scores prior to the softmax is essential. To address this, we propose a normalized biaffine scoring module incorporating input variance correction, integrated into a one-hop parser based on an N-layer BiLSTM architecture. Evaluated on six mainstream dependency treebanks, our approach achieves state-of-the-art performance on two datasets. Moreover, it significantly reduces model parameters, decreases training sample requirements, and simultaneously improves robustness and generalization. The proposed method establishes a new paradigm for efficient, lightweight dependency parsing.

📝 Abstract
Dependency parsing is the task of inferring natural language structure, often approached by modeling word interactions via attention through biaffine scoring. This mechanism works like self-attention in Transformers, where scores are calculated for every pair of words in a sentence. However, unlike Transformer attention, biaffine scoring does not use normalization prior to taking the softmax of the scores. In this paper, we provide theoretical evidence and empirical results revealing that a lack of normalization necessarily results in overparameterized parser models, where the extra parameters compensate for the sharp softmax outputs produced by high variance inputs to the biaffine scoring function. We argue that biaffine scoring can be made substantially more efficient by performing score normalization. We conduct experiments on six datasets for semantic and syntactic dependency parsing using a one-hop parser. We train N-layer stacked BiLSTMs and evaluate the parser's performance with and without normalizing biaffine scores. Normalizing allows us to beat the state of the art on two datasets, with fewer samples and trainable parameters. Code: https://anonymous.4open.science/r/EfficientSDP-70C1
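The core idea (scale the biaffine logits so their variance does not blow up the softmax) can be sketched as follows. This is an illustrative numpy sketch, not the paper's implementation: with random Gaussian inputs, the variance of the bilinear form h⊤Um grows like d², so dividing the logits by d restores roughly unit variance; the paper's exact variance-correction term may differ. All function and variable names here (`biaffine_scores`, `heads`, `deps`, `U`) are invented for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def biaffine_scores(heads, deps, U, normalize=True):
    """Arc-attachment probabilities from biaffine scores s[i, j] = heads[i] @ U @ deps[j].

    With normalize=True the logits are divided by d before the softmax,
    analogous to the 1/sqrt(d) scaling in Transformer attention (here the
    bilinear form's std grows like d rather than sqrt(d) for random U).
    """
    d = heads.shape[-1]
    scores = heads @ U @ deps.T            # (n, n) raw biaffine logits
    if normalize:
        scores = scores / d                # variance correction for the logits
    return softmax(scores, axis=-1)        # per-word head distribution

rng = np.random.default_rng(0)
n, d = 5, 64                               # 5 words, 64-dim representations
heads = rng.standard_normal((n, d))
deps = rng.standard_normal((n, d))
U = rng.standard_normal((d, d))

raw = biaffine_scores(heads, deps, U, normalize=False)
norm = biaffine_scores(heads, deps, U, normalize=True)
# Unnormalized logits have std ~ d, so the softmax saturates to near
# one-hot outputs; scaling keeps the attachment distribution softer.
print(raw.max(axis=-1).mean(), norm.max(axis=-1).mean())
```

The paper's argument, in this toy setting: the saturated (near one-hot) softmax of the unnormalized scores is what the extra parameters of an overparameterized parser must compensate for, while the scaled version yields usable gradients without them.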
Problem

Research questions and friction points this paper is trying to address.

Lack of normalization in biaffine scoring causes overparameterized parser models
Biaffine scoring efficiency can be improved by score normalization
Normalization enhances dependency parsing performance with fewer parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Normalization improves biaffine scoring efficiency
Fewer parameters achieve state-of-the-art performance
Stacked BiLSTM experiments empirically validate the benefits of normalization