🤖 AI Summary
This study identifies definitional inaccuracy in mainstream AI-powered electronic dictionaries (e.g., Youdao) and its cognitive impact on learners of Chinese as a second language (L2). Employing a mixed-methods approach—integrating cognitive experiments, retrospective think-aloud protocols, learner surveys, critical corpus analysis of dictionary entries, and AI model reverse engineering—the study systematically uncovers two root causes: insufficient corpus preprocessing and semantic modeling biases in the underlying LLMs. Results demonstrate that incomplete or misleading definitions significantly impair translation accuracy, and that learners exhibit high dependency on dictionary outputs coupled with low verification awareness. Crucially, this work proposes a dual-track intervention framework—simultaneously advancing dictionary literacy education and AI model optimization—to enhance dictionary reliability and L2 vocabulary instruction. It provides empirically grounded insights and methodological innovations for human-AI collaborative lexicography.
📝 Abstract
Electronic dictionaries have largely replaced paper dictionaries and become central tools for L2 learners seeking to expand their vocabulary. Users often assume these resources are reliable and rarely question the validity of the definitions provided. The accuracy of major e-dictionaries is seldom scrutinized, and little attention has been paid to how their corpora are constructed. Research on dictionary use, particularly on the limitations of electronic dictionaries, remains scarce. This study combines experimentation, a user survey, and dictionary critique to examine Youdao, one of the most widely used e-dictionaries in China. The experiment paired a translation task with retrospective reflection: participants translated sentences containing words that are insufficiently or inaccurately defined in Youdao, and their consultation behavior was recorded to analyze how faulty definitions influenced comprehension. Results show that incomplete or misleading definitions can cause serious misunderstandings. Students also exhibited problematic consultation habits. The study further explores how such flawed definitions originate, highlighting issues in data processing and in the integration of AI and machine learning technologies into dictionary construction. The findings point to a need for better training in dictionary literacy among users, as well as improvements in the underlying AI models used to build e-dictionaries.