🤖 AI Summary
This study systematically investigates the capability of mainstream large language models (LLMs) on Chinese classifier (measure word) prediction—a fundamental yet underexplored NLP task. We construct a standardized benchmark dataset and conduct comparative analyses between LLMs and BERT, revealing for the first time the critical advantage of bidirectional attention mechanisms in classifier prediction. Through multi-strategy masking analysis, attention visualization, and controllable fine-tuning, we probe the influence of syntactic constituents—particularly nouns—on prediction behavior. Experimental results show that even after task-specific fine-tuning, LLMs significantly underperform BERT; noun semantics exert the strongest impact on prediction accuracy. Our work establishes a new evaluation benchmark, introduces a novel analytical framework grounded in interpretability, and provides mechanistic insights into Chinese classifier modeling—advancing both theoretical understanding and practical development in morphosyntactic processing for Chinese.
📝 Abstract
Classifiers are an important and defining feature of the Chinese language, and their correct prediction is key to numerous educational applications. Yet, whether the most popular Large Language Models (LLMs) possess proper knowledge of Chinese classifiers is a question that has largely remained unexplored in the Natural Language Processing (NLP) literature.
To address this question, we employ various masking strategies to evaluate the LLMs' intrinsic ability, the contribution of different sentence elements, and the workings of the attention mechanisms during prediction. In addition, we explore fine-tuning of LLMs to enhance their classifier-prediction performance.
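As a rough illustration of the masking idea described above (the paper's exact token format and mask symbol are not given here, so the details below are assumptions): the classifier slot in a sentence is hidden, optionally together with other constituents such as the following noun, and a model is asked to fill it in.

```python
def mask_sentence(tokens, cls_idx, also_mask=(), mask_token="[MASK]"):
    """Replace the classifier token (and any extra positions) with a mask.

    Hypothetical helper: masking only the classifier tests intrinsic
    prediction ability; additionally masking the noun tests how much
    the prediction depends on noun semantics.
    """
    out = list(tokens)
    for i in {cls_idx, *also_mask}:
        out[i] = mask_token
    return "".join(out)

# "我买了三本书" = "I bought three books"; "本" is the classifier for books.
tokens = ["我", "买", "了", "三", "本", "书"]

print(mask_sentence(tokens, cls_idx=4))                  # classifier only
print(mask_sentence(tokens, cls_idx=4, also_mask=(5,)))  # classifier + noun
```

Comparing model accuracy across such masking conditions isolates the contribution of each constituent, e.g. how much the following noun drives classifier choice.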
Our findings reveal that LLMs perform worse than BERT, even with fine-tuning. As expected, prediction benefits greatly from information about the following noun, which also explains the advantage of models with a bidirectional attention mechanism, such as BERT.