AI Summary
This paper addresses the challenges of detecting online disinformation and manipulative content, and of ensuring interpretability, in multilingual, multi-platform environments. Methodologically, it introduces the first unified multi-task evaluation benchmark integrating subjectivity detection, claim normalization, numerical fact-checking, and scientific web discourse processing. It proposes an end-to-end interpretable fact-checking pipeline that systematically unifies auxiliary verification tasks, including subjectivity analysis and claim normalization. Technically, it combines multilingual NLP, fine-grained span-level classification, semantic retrieval, and normalized semantic modeling. Key contributions include: (1) a cross-lingual, cross-task unified evaluation framework; (2) significant improvements in claim understanding and evidence alignment; and (3) enhanced robustness and interpretability of fact-checking systems, establishing a reusable technical foundation for global disinformation governance.
Abstract
The CheckThat! lab aims to advance the development of innovative technologies designed to identify and counteract online disinformation and manipulation efforts across various languages and platforms. The first five editions focused on key tasks in the information verification pipeline, including check-worthiness estimation, evidence retrieval and pairing, and verification. Since the 2023 edition, the lab has expanded its scope to address auxiliary tasks that support research and decision-making in verification. In the 2025 edition, the lab revisits core verification tasks while also considering auxiliary challenges. Task 1 focuses on the identification of subjectivity (a follow-up from CheckThat! 2024), Task 2 addresses claim normalization, Task 3 targets fact-checking numerical claims, and Task 4 explores scientific web discourse processing. These tasks present challenging classification and retrieval problems at both the document and span levels, including in multilingual settings.