🤖 AI Summary
This study addresses the identification of vagueness and evasion in political discourse, classifying statements as clear or evasive. It proposes an "evasion-first" hierarchical modelling paradigm: an evasion label is predicted first and the clarity label is then derived from it, enhanced through auxiliary training variants and zero-shot inference strategies. Experiments employ RoBERTa-large and GPT-5.2, the latter used for zero-shot reasoning. Results show that RoBERTa-large achieves the best performance on the public test set, while zero-shot GPT-5.2 generalises better on the hidden evaluation set. This work presents the first systematic evaluation of large language models' zero-shot generalisation for this task, offering a novel paradigm for computational analysis of political discourse.
📝 Abstract
This paper describes the KCLarity team's participation in CLARITY, a shared task at SemEval 2026 on classifying ambiguity and evasion techniques in political discourse. We investigate two modelling formulations: (i) directly predicting the clarity label, and (ii) predicting the evasion label and deriving clarity through the task taxonomy hierarchy. We further explore several auxiliary training variants and evaluate decoder-only models in a zero-shot setting under the evasion-first formulation. Overall, the two formulations yield comparable performance. Among encoder-based models, RoBERTa-large achieves the strongest results on the public test set, while zero-shot GPT-5.2 generalises better on the hidden evaluation set.
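The evasion-first formulation can be sketched as a two-step pipeline: a classifier predicts an evasion label, and the clarity label is then read off the taxonomy hierarchy. A minimal illustration, assuming a hypothetical label set (the actual CLARITY taxonomy labels are not given here):

```python
# Hedged sketch of the "evasion-first" formulation: predict an evasion
# label first, then derive the clarity label via the taxonomy hierarchy.
# The label names below are illustrative assumptions, not the shared
# task's actual taxonomy.

EVASION_TO_CLARITY = {
    "direct_reply": "clear",       # no evasion technique -> clear answer
    "partial_reply": "ambiguous",  # hypothetical intermediate case
    "dodging": "evasive",          # hypothetical evasion technique
    "deflection": "evasive",       # hypothetical evasion technique
}

def derive_clarity(evasion_label: str) -> str:
    """Map a predicted evasion label to its parent clarity label.

    Unknown labels default to "evasive" here purely for illustration.
    """
    return EVASION_TO_CLARITY.get(evasion_label, "evasive")
```

Under this sketch, only the evasion classifier is trained; clarity prediction is deterministic given the hierarchy, which is what allows the same formulation to be reused unchanged for zero-shot decoder-only models.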