Overview of the Sensemaking Task at the ELOQUENT 2025 Lab: LLMs as Teachers, Students and Evaluators

📅 2025-07-16
🤖 AI Summary
This study addresses the challenge of evaluating how well large language models (LLMs) make sense of input texts. The Sensemaking shared task uses a three-stage assessment pipeline inspired by classroom exams: (1) a Teacher system generates questions grounded in the source text; (2) a Student system answers them; and (3) an Evaluator system scores the answers strictly against the original materials. Experiments on English, German, Ukrainian, and Czech texts, drawn from fact-checking analyses, textbooks, and transcripts of lectures and educational videos, compare submissions from four participating teams with commercial LLM baselines. The pipeline proves feasible and Student LLMs answer acceptably overall, but restricting answers to the given input texts remains problematic, and adversarial tests show that Evaluators following the LLM-as-a-Judge paradigm erroneously accept garbled question-answer pairs and answers to mixed-up questions. Enforcing strict adherence to source-text evidence remains a critical open challenge.

📝 Abstract
ELOQUENT is a set of shared tasks that aims to create easily testable high-level criteria for evaluating generative language models. Sensemaking is one such shared task. In Sensemaking, we try to assess how well generative models "make sense out of a given text" in three steps inspired by exams in a classroom setting: (1) Teacher systems should prepare a set of questions, (2) Student systems should answer these questions, and (3) Evaluator systems should score these answers, all adhering rather strictly to a given set of input materials. We report on the 2025 edition of Sensemaking, where we had 7 sources of test materials (fact-checking analyses of statements, textbooks, transcribed recordings of a lecture, and educational videos) spanning English, German, Ukrainian, and Czech languages. This year, 4 teams participated, providing us with 2 Teacher submissions, 2 Student submissions, and 2 Evaluator submissions. We added baselines for Teacher and Student using commercial large language model systems. We devised a fully automatic evaluation procedure, which we compare to a minimalistic manual evaluation. We were able to make some interesting observations. For the first task, the creation of questions, better evaluation strategies will still have to be devised because it is difficult to discern the quality of the various candidate question sets. In the second task, question answering, the LLMs examined overall perform acceptably, but restricting their answers to the given input texts remains problematic. In the third task, evaluation of question answers, our adversarial tests reveal that systems using the LLM-as-a-Judge paradigm erroneously rate both garbled question-answer pairs and answers to mixed-up questions as acceptable.
Problem

Research questions and friction points this paper is trying to address.

Evaluate generative models' sensemaking via teacher-student-evaluator tasks
Assess question creation, answering, and scoring in multilingual contexts
Address limitations in LLM-based evaluation of question-answer quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs as Teachers generate exam-like questions
LLMs as Students answer generated questions
LLMs as Evaluators score answers automatically
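The Teacher, Student, and Evaluator roles above can be sketched as one pipeline in which a single chat-completion function is reused with different role prompts. This is a minimal illustrative sketch, not the authors' implementation: `call_llm` and the role prompts are hypothetical placeholders, and a trivial stub backend is included so the control flow runs without any API.

```python
from typing import Callable, Dict, List

def run_sensemaking(source_text: str,
                    call_llm: Callable[[str, str], str],
                    n_questions: int = 3) -> List[Dict[str, str]]:
    """Teacher -> Student -> Evaluator pipeline over one source text.

    `call_llm(role_prompt, user_prompt)` stands in for any chat-completion
    backend; all three roles reuse the same function with different prompts.
    """
    # (1) Teacher: generate questions grounded strictly in the source text.
    teacher_prompt = ("You are a Teacher. Write exam questions answerable "
                      "only from the given text, one per line.")
    questions = call_llm(teacher_prompt, source_text).splitlines()[:n_questions]

    results = []
    for q in questions:
        # (2) Student: answer using only the provided material.
        student_prompt = "You are a Student. Answer using only the text below."
        answer = call_llm(student_prompt,
                          f"TEXT:\n{source_text}\n\nQUESTION: {q}")

        # (3) Evaluator: score the answer strictly against the source text.
        evaluator_prompt = ("You are an Evaluator. Score 0-5 how well the "
                            "answer is supported by the text. Reply with a digit.")
        score = call_llm(evaluator_prompt,
                         f"TEXT:\n{source_text}\n\nQ: {q}\nA: {answer}")
        results.append({"question": q, "answer": answer, "score": score})
    return results

def stub_llm(role_prompt: str, user_prompt: str) -> str:
    """Offline stand-in backend so the pipeline is runnable as-is."""
    if "Teacher" in role_prompt:
        return "What is the text about?\nWhich languages are covered?"
    if "Student" in role_prompt:
        return "It is about the Sensemaking shared task."
    return "4"

report = run_sensemaking("ELOQUENT Sensemaking test material.", stub_llm)
```

The adversarial tests described in the abstract would then amount to feeding this same Evaluator step garbled answers or mismatched question-answer pairs and checking whether the score drops as it should.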