ECHO: An Open Research Platform for Evaluation of Chat, Human Behavior, and Outcomes

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes an open, low-code research platform that addresses the current lack of unified infrastructure for studying human interaction with conversational AI and search engines in a reproducible, technically accessible manner. It is the first framework to integrate dialogue-based and search-based information-seeking paradigms, incorporating user consent management, background surveys, task orchestration, fine-grained interaction logging, and structured data export. The platform enables end-to-end human-AI interaction experiments, substantially lowering the technical barriers to human-centered AI evaluation, and thus provides scalable, reproducible mixed-methods research infrastructure for information retrieval, human-computer interaction, and the social sciences.

📝 Abstract
ECHO (Evaluation of Chat, Human behavior, and Outcomes) is an open research platform designed to support reproducible, mixed-method studies of human interaction with both conversational AI systems and Web search engines. It enables researchers from a range of disciplines to orchestrate end-to-end experimental workflows that integrate consent and background surveys, chat-based and search-based information-seeking sessions, writing or judgment tasks, and pre- and post-task evaluations within a unified, low-code framework. ECHO logs fine-grained interaction traces and participant responses, and exports structured datasets for downstream analysis. By supporting both chat and search alongside flexible evaluation instruments, ECHO lowers the technical barriers to studying learning, decision making, and user experience across different information access paradigms, empowering researchers from information retrieval, HCI, and the social sciences to conduct scalable and reproducible human-centered AI evaluations.
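The fine-grained interaction logging and structured export described in the abstract can be sketched as follows. This is a minimal illustration under assumed names: the `InteractionEvent` fields and the `export_csv` helper are hypothetical, not ECHO's actual schema or API.

```python
import csv
import json
import time
from dataclasses import dataclass, field

# Hypothetical record for one logged interaction; field names are
# assumptions for illustration, not ECHO's real data model.
@dataclass
class InteractionEvent:
    participant_id: str
    session_id: str
    paradigm: str          # "chat" or "search"
    event_type: str        # e.g. "message", "query", "click"
    payload: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

class InteractionLog:
    """Collects events in memory and exports them as structured data."""

    def __init__(self) -> None:
        self.events: list[InteractionEvent] = []

    def record(self, event: InteractionEvent) -> None:
        self.events.append(event)

    def export_csv(self, path: str) -> None:
        # One row per event; the free-form payload is serialized to JSON
        # so the export stays flat and analysis-ready.
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["participant_id", "session_id", "paradigm",
                             "event_type", "payload", "timestamp"])
            for e in self.events:
                writer.writerow([e.participant_id, e.session_id, e.paradigm,
                                 e.event_type, json.dumps(e.payload),
                                 e.timestamp])

log = InteractionLog()
log.record(InteractionEvent("p01", "s01", "chat", "message",
                            {"role": "user", "text": "What is ECHO?"}))
log.record(InteractionEvent("p01", "s01", "search", "query",
                            {"q": "conversational AI evaluation"}))
```

The point of the sketch is the shape of the data, not the storage backend: both chat and search sessions feed the same event stream, which is what makes cross-paradigm analysis of the exported dataset straightforward.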
Problem

Research questions and friction points this paper is trying to address.

conversational AI
human behavior
information retrieval
user experience
reproducible research
Innovation

Methods, ideas, or system contributions that make the work stand out.

open research platform
conversational AI evaluation
mixed-method studies
low-code experimental framework
structured interaction logging