Sim4IA-Bench: A User Simulation Benchmark Suite for Next Query and Utterance Prediction

📅 2025-11-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Validating user simulators is difficult because there are no established measures or benchmarks for checking whether simulated behavior matches real user behavior, particularly for next-query and next-utterance prediction. To address this, the authors present Sim4IA-Bench, the first benchmark suite of its kind in the IR community, built on 160 real search sessions from the CORE search engine. For a publicly available subset of 70 sessions, up to 62 simulator runs are aligned with the real interaction data across two tasks: Task A (next-query prediction) and Task B (next-utterance prediction). The suite also introduces a new measure for evaluating next-query predictions and, beyond serving as a testbed for these tasks, enables exploratory studies of query reformulation behavior, intent drift, and interaction-aware retrieval evaluation, providing a unified, reproducible platform for comparing user simulation approaches.
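In essence, Task A reduces to comparing the query a simulator predicts next against the query the real user actually issued. As a minimal sketch of such a comparison, the function below scores a prediction by token-level Jaccard overlap; this lexical measure is an illustrative assumption, not the new measure the paper introduces.

```python
# Minimal sketch of a lexical next-query score. Token-level Jaccard overlap
# is an illustrative assumption, not Sim4IA-Bench's own measure.

def jaccard_next_query_score(predicted: str, actual: str) -> float:
    """Token-level Jaccard similarity between a predicted and a real next query."""
    pred_tokens = set(predicted.lower().split())
    real_tokens = set(actual.lower().split())
    if not pred_tokens and not real_tokens:
        return 1.0  # two empty queries: treat as a trivial match
    if not pred_tokens or not real_tokens:
        return 0.0
    return len(pred_tokens & real_tokens) / len(pred_tokens | real_tokens)

# Partial term overlap between a reformulation and the true next query.
print(jaccard_next_query_score("neural ranking models", "neural ranking for IR"))  # 0.4
```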

📝 Abstract
Validating user simulation is a difficult task due to the lack of established measures and benchmarks, which makes it challenging to assess whether a simulator accurately reflects real user behavior. As part of the Sim4IA Micro-Shared Task at the Sim4IA Workshop, SIGIR 2025, we present Sim4IA-Bench, a simulation benchmark suite for the prediction of next queries and utterances, the first of its kind in the IR community. Our dataset as part of the suite comprises 160 real-world search sessions from the CORE search engine. For 70 of these sessions, up to 62 simulator runs are available, divided into Task A and Task B, in which different approaches predicted users' next search queries or utterances. Sim4IA-Bench provides a basis for evaluating and comparing user simulation approaches and for developing new measures of simulator validity. Although modest in size, the suite represents the first publicly available benchmark that links real search sessions with simulated next-query predictions. In addition to serving as a testbed for next-query prediction, it also enables exploratory studies on query reformulation behavior, intent drift, and interaction-aware retrieval evaluation. We also introduce a new measure for evaluating next-query predictions in this task. By making the suite publicly available, we aim to promote reproducible research and stimulate further work on realistic and explainable user simulation for information access: https://github.com/irgroup/Sim4IA-Bench.
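To make the dataset layout concrete, the hypothetical harness below scores one Task A simulator run against the real sessions. The file names, JSON schema, and exact-match scoring are all assumptions made for illustration; the repository linked above documents the actual format.

```python
# Hypothetical harness for scoring one simulator run on Task A (next-query
# prediction). File names and JSON schema are assumptions; see
# https://github.com/irgroup/Sim4IA-Bench for the actual data format.
import json
from statistics import mean

def exact_match(predicted: str, actual: str) -> float:
    """1.0 if the normalized queries are identical, else 0.0."""
    return float(predicted.strip().lower() == actual.strip().lower())

def evaluate_run(sessions_file: str, run_file: str) -> float:
    """Average exact-match score of a run's predictions against real next queries."""
    with open(sessions_file) as f:
        # Assumed schema: [{"session_id": ..., "next_query": ...}, ...]
        sessions = {s["session_id"]: s["next_query"] for s in json.load(f)}
    with open(run_file) as f:
        # Assumed schema: {"<session_id>": "<predicted next query>", ...}
        predictions = json.load(f)
    return mean(exact_match(pred, sessions[sid]) for sid, pred in predictions.items())
```

Exact match is deliberately strict; softer lexical or semantic similarity scores (such as the Jaccard sketch above) can slot into the same loop.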
Problem

Research questions and friction points this paper is trying to address.

Lack of established measures and benchmarks for validating user simulation accuracy
Difficulty in assessing whether simulators accurately reflect real user behavior
Need for reproducible evaluation of next-query and next-utterance prediction methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sim4IA-Bench benchmark suite for user simulation
Dataset linking 160 real search sessions from the CORE search engine with up to 62 simulator runs
New measure for evaluating next-query prediction accuracy