🤖 AI Summary
This work addresses the challenges of multiword expression (MWE) identification, particularly the issues of class imbalance and the difficulty in accurately detecting discontinuous and noun-type MWEs. The authors propose a lightweight yet effective framework that reformulates the task as a token-level binary classification problem with START/END/INSIDE labels, integrating noun phrase chunking and dependency syntactic features while employing oversampling to mitigate data imbalance. Evaluated on the CoAM dataset, their DeBERTa-v3-large–based model achieves an F1 score of 69.8%, surpassing the previous state-of-the-art by 12 percentage points while using only 1/165th the number of parameters of comparable large models. Further evaluation on STREUSLE yields an F1 of 78.9%, demonstrating the method’s strong generalization and effectiveness.
📝 Abstract
We present a comprehensive approach to multiword expression (MWE) identification that combines binary token-level classification, linguistic feature integration, and data augmentation. Our DeBERTa-v3-large model achieves 69.8% F1 on the CoAM dataset, surpassing the previous best result on this dataset (Qwen-72B, 57.8% F1) by 12 points while using 165x fewer parameters. We achieve this performance by (1) reformulating detection as binary token-level START/END/INSIDE classification rather than span-based prediction, (2) incorporating NP chunking and dependency features that aid identification of discontinuous and NOUN-type MWEs, and (3) applying oversampling to address the severe class imbalance in the training data. We confirm that our method generalizes by evaluating on the STREUSLE dataset, achieving 78.9% F1. These results demonstrate that carefully designed smaller models can substantially outperform LLMs on structured NLP tasks, with important implications for resource-constrained deployments.
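To make the token-level reformulation concrete, the sketch below shows one way an MWE span, possibly discontinuous, can be converted into three binary label sequences (START, END, INSIDE). This is an illustrative reading of the labeling scheme, not the paper's code; the function name `to_sei_labels` and the example sentence are made up for this sketch.

```python
def to_sei_labels(tokens, mwe_spans):
    """Convert MWE token-index spans into three binary label sequences.

    mwe_spans: a list of sorted token-index lists, one per MWE; a
    discontinuous MWE simply skips indices, e.g. [1, 4] for "put ... on".
    """
    n = len(tokens)
    start = [0] * n   # 1 where an MWE begins
    end = [0] * n     # 1 where an MWE ends
    inside = [0] * n  # 1 for every token belonging to an MWE
    for span in mwe_spans:
        start[span[0]] = 1
        end[span[-1]] = 1
        for i in span:
            inside[i] = 1
    return start, end, inside

tokens = ["She", "put", "her", "coat", "on", "quickly"]
# "put ... on" is a discontinuous verb-particle MWE at indices 1 and 4
s, e, i = to_sei_labels(tokens, [[1, 4]])
# s = [0, 1, 0, 0, 0, 0], e = [0, 0, 0, 0, 1, 0], i = [0, 1, 0, 0, 1, 0]
```

Framing each of the three sequences as an independent per-token binary decision is what lets a gap token ("her coat") stay unlabeled while the span's boundary tokens are still recoverable.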