🤖 AI Summary
This study addresses the challenge of assessing interaction quality in English-as-a-Second-Language (ESL) spoken dialogues. We propose the first interpretable evaluation framework that jointly models macro-level interactivity labels (e.g., topic management) and micro-level linguistic features (e.g., pronouns, echoes, referring expressions). Using manually annotated ESL dialogue data, we train XGBoost and SVM regression/classification models to predict interaction quality and to analyze feature importance. The analysis reveals, for the first time, how fine-grained linguistic signals predict higher-level interaction quality: of the 17 micro-level features, several (including reference words) show statistically significant effects, with pronoun usage emerging as a particularly strong predictor. These findings empirically link linguistic form to communicative competence and enable automated, multidimensional, and interpretable assessment of ESL spoken interaction ability.
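To make the modeling step concrete, here is a minimal sketch of how per-dialogue micro-level feature counts could be fed to an XGBoost regressor to predict a macro-level interactivity score and to rank features by importance. The feature names, synthetic data, and hyperparameters below are illustrative placeholders, not the paper's actual dataset or configuration.

```python
# Sketch: predict a dialogue-level interactivity score from micro-level
# feature counts with XGBoost, then inspect feature importances.
# All names and data here are hypothetical stand-ins for the annotated corpus.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Hypothetical subset of the 17 micro-level span features (per-dialogue counts).
feature_names = ["pronouns", "backchannels", "echoes", "referring_expressions"]

# Synthetic stand-in: 200 dialogues x 4 feature counts, with a 1-5 quality score.
X = rng.poisson(lam=5.0, size=(200, len(feature_names))).astype(float)
y = np.clip(1 + 0.4 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.5, 200), 1, 5)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gradient-boosted regressor for the macro-level interactivity score.
model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X_train, y_train)

print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))

# Rank micro-level features by model importance.
for name, imp in sorted(zip(feature_names, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

A classifier (e.g., an SVM over the same feature matrix) would follow the same pattern when the interactivity label is categorical rather than a numeric score.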
📝 Abstract
We present an evaluation framework for assessing interactive dialogue involving English as a Second Language (ESL) speakers. Our framework collects dialogue-level interactivity labels (e.g., topic management; 4 labels in total) and micro-level span features (e.g., backchannels; 17 features in total). Given our annotated data, we study how the micro-level features influence the (higher-level) interactivity quality of ESL dialogues by constructing various machine learning models. Our results demonstrate that certain micro-level features, such as reference words (e.g., she, her, he), strongly correlate with interactivity quality, revealing new insights into the interplay between higher-level dialogue quality and lower-level linguistic signals. Our framework also provides a means to assess ESL communication, which is useful for language assessment.