🤖 AI Summary
This work presents the CheckThat! lab, an evaluation campaign addressing the proliferation of misinformation in multilingual online environments. The lab covers the verification pipeline end to end through three tasks: retrieving the sources behind scientific web claims, fact-checking numerical and temporal claims with an added reasoning component, and generating complete fact-checking articles. Together, the tasks combine multilingual information retrieval, claim verification, and text generation, and a shared evaluation setup supports assessment at multiple granularities, from span-level classification to full-article generation. The lab thereby steers the field toward more holistic, interpretable, and cross-lingually capable fact-checking systems.
📝 Abstract
The CheckThat! lab aims to advance the development of innovative technologies for combating disinformation and manipulation efforts in online communication across a multitude of languages and platforms. While in early editions the focus was on core tasks of the verification pipeline (check-worthiness estimation, evidence retrieval, and verification), the past three editions added further tasks linked to the verification process. In this year's edition, the verification pipeline is again at the center, with the following tasks: Task 1 on source retrieval for scientific web claims (a follow-up to the 2025 edition); Task 2 on fact-checking numerical and temporal claims, which adds a reasoning component to the 2025 edition; and Task 3, which extends the verification pipeline with the generation of full fact-checking articles. These tasks pose challenging classification, retrieval, and generation problems at the document and span level, including in multilingual settings.