🤖 AI Summary
This study addresses the challenges posed by spoken-language code-switching, which frequently exhibits disfluencies, repetitions, and omissions that violate Universal Dependencies (UD) assumptions, thereby degrading the performance of existing parsers and large language models. Conventional evaluation metrics compound the problem by failing to distinguish genuine parsing errors from linguistically valid variation. To address this, the authors develop a linguistically grounded taxonomy of spoken phenomena and introduce SpokeBench, an expert-annotated benchmark. They further propose FLEX-UD, an ambiguity-aware evaluation metric that assesses parsing quality more accurately, and design DECAP, a decoupled agent-based parsing framework that separates the handling of spoken-language phenomena from core syntactic analysis. Experiments demonstrate that DECAP improves parsing performance by up to 52.6% over existing methods without requiring retraining, significantly enhancing both robustness and interpretability.
📝 Abstract
Spoken code-switching (CSW) challenges syntactic parsing in ways not observed in written text. Disfluencies, repetition, ellipsis, and discourse-driven structure routinely violate standard Universal Dependencies (UD) assumptions, causing parsers and large language models (LLMs) to fail despite strong performance on written data. These failures are compounded by rigid evaluation metrics that conflate genuine structural errors with acceptable variation. In this work, we present a systems-oriented approach to spoken CSW parsing. We introduce a linguistically grounded taxonomy of spoken CSW phenomena and SpokeBench, an expert-annotated gold benchmark designed to test spoken-language structure beyond standard UD assumptions. We further propose FLEX-UD, an ambiguity-aware evaluation metric that, unlike standard metrics, does not penalize linguistically plausible analyses as errors; even under FLEX-UD, existing parsing techniques perform poorly on spoken CSW. We then propose DECAP, a decoupled agentic parsing framework that isolates spoken-phenomena handling from core syntactic analysis. Experiments show that DECAP produces more robust and interpretable parses without retraining and achieves improvements of up to 52.6% over existing parsing techniques. FLEX-UD evaluations further reveal qualitative improvements that are masked by standard metrics.
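To make the notion of an ambiguity-aware metric concrete, the sketch below contrasts a standard unlabeled attachment score with a hypothetical "flexible" variant in the spirit of FLEX-UD. The abstract does not give FLEX-UD's actual formulation, so this is only an illustrative assumption: each token's gold annotation is taken to be a set of acceptable heads rather than a single head, so that linguistically plausible attachments are not counted as errors.

```python
def strict_uas(pred_heads, gold_heads):
    """Standard unlabeled attachment score: one gold head per token."""
    correct = sum(p == g for p, g in zip(pred_heads, gold_heads))
    return correct / len(pred_heads)

def flexible_uas(pred_heads, gold_head_sets):
    """Hypothetical ambiguity-aware variant: a predicted head counts as
    correct if it matches ANY head in the token's set of acceptable
    attachments (e.g. either copy of a repeated word in a disfluency)."""
    correct = sum(p in gs for p, gs in zip(pred_heads, gold_head_sets))
    return correct / len(pred_heads)

# Toy utterance where token 2 (a repeated word) could plausibly attach
# to either the verb at index 0 or its copy at index 1.
pred = [3, 0, 1, 0]                  # predicted head indices
gold_strict = [3, 0, 0, 0]           # single-head gold annotation
gold_flex = [{3}, {0}, {0, 1}, {0}]  # sets of acceptable heads

print(strict_uas(pred, gold_strict))    # 0.75: plausible attachment penalized
print(flexible_uas(pred, gold_flex))    # 1.0: accepted as valid variation
```

Under the strict metric the parser is charged with an error on token 2 even though its attachment is linguistically defensible; the flexible variant credits it, which is the kind of gap between standard metrics and FLEX-UD the abstract describes.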