🤖 AI Summary
To address the lack of reliable evaluation methods for long-form generation in scientific literature tasks, this paper introduces SciArena, an open, collaborative platform for evaluating foundation models on open-ended scientific question answering. Methodologically, it adopts a human preference-based community voting mechanism in the style of Chatbot Arena, combining collective-intelligence ranking with pairwise model comparisons to construct a large-scale, human-annotated preference dataset. Its contributions are threefold: (1) it supports 23 open-source and proprietary foundation models and has collected over 13,000 votes from trusted domain researchers, empirically confirming that submitted questions are diverse and that annotators show strong self-consistency and inter-annotator agreement; (2) it reports a model ranking leaderboard derived from these votes; and (3) it releases SciArena-Eval, a meta-evaluation benchmark built on the collected preference data that scores automated evaluators by comparing their pairwise judgments with human votes, showing that current model-based evaluation methods still fall short of human judgment.
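As a rough illustration of how pairwise community votes can be turned into a leaderboard, the sketch below applies Elo-style rating updates, a common choice for Chatbot Arena-style rankings. The paper does not specify its rating method here, and all model names, votes, and parameters in this snippet are hypothetical.

```python
from collections import defaultdict

def update_elo(ratings, model_a, model_b, winner, k=32, scale=400):
    """Apply one pairwise vote; winner is 'A', 'B', or 'tie'."""
    ra, rb = ratings[model_a], ratings[model_b]
    # Expected score of model_a against model_b under the Elo model.
    expected_a = 1.0 / (1.0 + 10 ** ((rb - ra) / scale))
    score_a = {"A": 1.0, "B": 0.0, "tie": 0.5}[winner]
    ratings[model_a] = ra + k * (score_a - expected_a)
    ratings[model_b] = rb + k * ((1.0 - score_a) - (1.0 - expected_a))

# Hypothetical votes: (model_a, model_b, winner).
votes = [
    ("model-x", "model-y", "A"),
    ("model-y", "model-z", "tie"),
    ("model-x", "model-z", "A"),
]
ratings = defaultdict(lambda: 1000.0)  # every model starts at 1000
for a, b, w in votes:
    update_elo(ratings, a, b, w)

# Sort descending by rating to produce the leaderboard.
for model, rating in sorted(ratings.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{model}: {rating:.1f}")
```

In practice, Arena-style leaderboards often refit ratings over the full vote history (for example with a Bradley-Terry model) rather than applying order-sensitive online updates, but the core idea of aggregating pairwise preferences into scalar scores is the same.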
📝 Abstract
We present SciArena, an open and collaborative platform for evaluating foundation models on scientific literature tasks. Unlike traditional benchmarks for scientific literature understanding and synthesis, SciArena engages the research community directly, following the Chatbot Arena evaluation approach of community voting on model comparisons. By leveraging collective intelligence, SciArena offers a community-driven evaluation of model performance on open-ended scientific tasks that demand literature-grounded, long-form responses. The platform currently supports 23 open-source and proprietary foundation models and has collected over 13,000 votes from trusted researchers across diverse scientific domains. We analyze the data collected so far and confirm that the submitted questions are diverse and aligned with real-world literature needs, and that participating researchers demonstrate strong self-consistency and inter-annotator agreement in their evaluations. We discuss the results and insights based on the model ranking leaderboard. To further promote research in building model-based automated evaluation systems for literature tasks, we release SciArena-Eval, a meta-evaluation benchmark based on our collected preference data. The benchmark measures the accuracy of models in judging answer quality by comparing their pairwise assessments with human votes. Our experiments highlight the benchmark's challenges and emphasize the need for more reliable automated evaluation methods.
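A minimal sketch of the agreement metric the abstract describes for SciArena-Eval: an automated judge's pairwise verdict counts as correct when it matches the human vote. The function name, data fields, and toy data below are illustrative assumptions, not the paper's actual interface.

```python
def judge_accuracy(examples, judge_fn):
    """Fraction of pairwise comparisons where the judge agrees with humans.

    examples: iterable of dicts with 'question', 'answer_a', 'answer_b',
              and 'human_vote' in {'A', 'B'}  (hypothetical schema).
    judge_fn: callable returning 'A' or 'B' for a given comparison,
              e.g. a wrapper around an LLM-as-judge prompt.
    """
    correct = 0
    total = 0
    for ex in examples:
        prediction = judge_fn(ex["question"], ex["answer_a"], ex["answer_b"])
        correct += prediction == ex["human_vote"]
        total += 1
    return correct / total if total else 0.0

# Toy usage with a trivial stand-in judge that always prefers answer A.
toy_data = [
    {"question": "q1", "answer_a": "...", "answer_b": "...", "human_vote": "A"},
    {"question": "q2", "answer_a": "...", "answer_b": "...", "human_vote": "B"},
]
always_a = lambda q, a, b: "A"
print(judge_accuracy(toy_data, always_a))  # 0.5
```

Under this framing, a judge model is evaluated purely on how often its pairwise preference reproduces the human preference, which is what makes the collected votes usable as a meta-evaluation benchmark.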