Step-by-step Instructions and a Simple Tabular Output Format Improve the Dependency Parsing Accuracy of LLMs

📅 2025-06-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address invalid syntactic structures, low accuracy, and hallucination when large language models (LLMs) are applied to dependency parsing, this paper proposes a stepwise instruction strategy: the model first performs part-of-speech tagging, then predicts syntactic heads and dependency relations, and emits its answer in a lightweight, structurally constrained CoNLL-U-style tabular format. Rather than requesting the full parse in a single end-to-end generation, the method uses prompt engineering to make the reasoning controllable and free of hallucination and contamination. Evaluated on Universal Dependencies treebanks across 17 languages, it achieves state-of-the-art (SOTA) performance with zero-shot prompting alone, and multilingual joint fine-tuning yields further gains in cross-lingual generalization. All outputs satisfy dependency-structure legality constraints. The core innovation is the first integration of a stepwise reasoning instruction chain with enforced structured output, a pairing that provides both interpretability and formal correctness.
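The summary's claim that every output "satisfies dependency-structure legality constraints" can be made concrete with a small validity check. The sketch below (not the authors' code; the five-column layout ID, FORM, UPOS, HEAD, DEPREL is an assumed simplification of full CoNLL-U) verifies that a tabular parse is well-formed: sequential token IDs, in-range heads, exactly one root, and no cycles.

```python
# Minimal sketch of a legality check for a simplified CoNLL-U-style table.
# Assumed columns: ID, FORM, UPOS, HEAD, DEPREL (tab-separated, one token
# per line). This is illustrative, not the paper's implementation.

def is_valid_parse(table: str) -> bool:
    rows = [line.split("\t") for line in table.strip().splitlines()]
    n = len(rows)
    heads = {}
    for idx, (tok_id, _form, _upos, head, _deprel) in enumerate(rows, start=1):
        if int(tok_id) != idx:           # IDs must run 1..n in order
            return False
        h = int(head)
        if not 0 <= h <= n or h == idx:  # head in range, no self-attachment
            return False
        heads[idx] = h
    if sum(1 for h in heads.values() if h == 0) != 1:  # exactly one root
        return False
    for start in heads:                  # walking head links must reach 0
        seen, node = set(), start
        while node != 0:
            if node in seen:             # cycle detected
                return False
            seen.add(node)
            node = heads[node]
    return True

sample = "1\tThe\tDET\t2\tdet\n2\tcat\tNOUN\t3\tnsubj\n3\tsleeps\tVERB\t0\troot"
```

A tabular format makes such checks a matter of line-by-line parsing, which is one reason it is easier to enforce than bracket-based output.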

📝 Abstract
Recent advances in large language models (LLMs) have enabled impressive performance on a variety of tasks. However, standard prompting often struggles to produce structurally valid and accurate outputs, especially in dependency parsing. We propose a novel step-by-step instruction strategy, in which universal part-of-speech tagging precedes the prediction of syntactic heads and dependency labels, combined with a simplified CoNLL-U-like output format. Our method achieves state-of-the-art accuracy on Universal Dependencies datasets across 17 languages without hallucination or contamination. We further show that multilingual fine-tuning simultaneously improves cross-language generalization performance. Our results highlight the effectiveness of explicit reasoning steps in LLM-based parsing and offer a scalable, format-consistent alternative to bracket-based approaches.
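The two-stage instruction the abstract describes can be sketched as a prompt template. The wording, step phrasing, and column choice below are assumptions for illustration, not the paper's actual prompt:

```python
# Hypothetical prompt following the described step-by-step scheme:
# step 1 asks for Universal POS tags, step 2 asks for syntactic heads
# and dependency labels, and the answer is requested as a simplified
# CoNLL-U-like table.

def build_parsing_prompt(sentence: str) -> str:
    return (
        "Parse the following sentence step by step.\n"
        f"Sentence: {sentence}\n"
        "Step 1: Assign a Universal POS tag to every token.\n"
        "Step 2: For each token, predict its syntactic head "
        "(0 for the root) and its Universal Dependencies relation label.\n"
        "Output one token per line as tab-separated columns:\n"
        "ID\tFORM\tUPOS\tHEAD\tDEPREL\n"
    )
```

Ordering POS tagging before head prediction mirrors the paper's point that explicit intermediate reasoning steps improve the final parse.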
Problem

Research questions and friction points this paper is trying to address.

Improving dependency parsing accuracy in LLMs
Reducing structural errors in standard prompting outputs
Enhancing multilingual generalization in dependency parsing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Step-by-step instruction strategy for parsing
Simplified CoNLL-U like output format
Multilingual fine-tuning enhances generalization
Hiroshi Matsuda
Megagon Labs, Tokyo, Recruit Co., Ltd.
Chunpeng Ma
Megagon Labs, Tokyo, Recruit Co., Ltd.
Masayuki Asahara
National Institute for Japanese Language and Linguistics
Linguistics · Cognitive Science · Japanese Linguistics · Lexical Semantics · Treebank