Human-Centric Evaluation for Foundation Models

📅 2025-06-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current foundation model evaluations over-rely on objective metrics and neglect authentic human experience. Method: We propose a Human-Centric Evaluation (HCE) framework that engages humans and LLMs in collaborative open-ended research tasks, conducting empirical assessments across three dimensions (problem-solving ability, information quality, and interaction experience) with over 540 participant sessions. Contribution/Results: We introduce the first interdisciplinary, multi-model, and reproducible subjective evaluation benchmark and open-source dataset (hosted on GitHub), integrating collaborative task design, structured feedback collection, and mixed qualitative-quantitative analysis. Results show Grok 3 achieves the highest overall performance, followed by DeepSeek-R1 and Gemini 2.5, while OpenAI o3-mini lags behind. This work establishes a novel paradigm for subjective LLM evaluation grounded in real-world usage scenarios.

📝 Abstract
Currently, nearly all evaluations of foundation models focus on objective metrics, emphasizing quiz performance to define model capabilities. While this model-centric approach enables rapid performance assessment, it fails to reflect authentic human experience. To address this gap, we propose a Human-Centric subjective Evaluation (HCE) framework focusing on three core dimensions: problem-solving ability, information quality, and interaction experience. Through experiments involving DeepSeek-R1, OpenAI o3-mini, Grok 3, and Gemini 2.5, we conduct over 540 participant-driven evaluations in which humans and models collaborate on open-ended research tasks, yielding a comprehensive subjective dataset. This dataset captures diverse user feedback across multiple disciplines, revealing distinct model strengths and adaptability. Our findings highlight Grok 3's superior performance, followed by DeepSeek-R1 and Gemini 2.5, with OpenAI o3-mini lagging behind. By offering a novel framework and a rich dataset, this study not only enhances subjective evaluation methodologies but also lays the foundation for standardized, automated assessments, advancing LLM development for research and practical scenarios. Our dataset link is https://github.com/yijinguo/Human-Centric-Evaluation.
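The abstract describes scoring each human-model session along three subjective dimensions and comparing models on aggregate. A minimal sketch of what such an aggregation could look like is below; the rating scale, field names, and example values are illustrative assumptions, not the paper's released dataset schema.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical 1-5 subjective ratings from participant sessions.
# Field names and values are illustrative only.
sessions = [
    {"model": "Grok 3", "problem_solving": 5, "info_quality": 4, "interaction": 5},
    {"model": "Grok 3", "problem_solving": 4, "info_quality": 5, "interaction": 4},
    {"model": "OpenAI o3-mini", "problem_solving": 3, "info_quality": 3, "interaction": 2},
]

DIMENSIONS = ("problem_solving", "info_quality", "interaction")

def aggregate(sessions):
    """Average each dimension per model, then average the three
    dimension means into a single overall score per model."""
    by_model = defaultdict(list)
    for s in sessions:
        by_model[s["model"]].append(s)
    report = {}
    for model, rows in by_model.items():
        dims = {d: mean(r[d] for r in rows) for d in DIMENSIONS}
        dims["overall"] = mean(dims[d] for d in DIMENSIONS)
        report[model] = dims
    return report

print(aggregate(sessions))
```

Equal weighting of the three dimensions is one simple choice; a real pipeline would also fold in the qualitative feedback the paper collects alongside the numeric ratings.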
Problem

Research questions and friction points this paper is trying to address.

Proposes human-centric evaluation for foundation models
Focuses on problem-solving ability, information quality, and interaction experience
Conducts over 540 participant-driven evaluations across four models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes the Human-Centric Evaluation (HCE) framework
Conducts over 540 participant-driven model evaluations
Creates a comprehensive open-source subjective dataset for LLMs