Eval Factsheets: A Structured Framework for Documenting AI Evaluations

📅 2025-12-03
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Problem: Current AI evaluation methodologies lack standardized, systematic documentation, which undermines reproducibility, transparency, and informed decision-making. Method: The paper introduces Eval Factsheets, a structured framework for documenting AI evaluations. A five-dimensional taxonomy (Context, Scope, Structure, Method, and Alignment) characterizes diverse evaluation paradigms, from traditional benchmarks to LLM-as-judge approaches, and a taxonomy-guided questionnaire specifies mandatory and recommended fields spanning the evaluation lifecycle. Contribution/Results: Case studies on multiple benchmarks show that Eval Factsheets represent heterogeneous evaluation practices consistently, improving cross-evaluation comparability, reproducibility, and transparency. The framework offers an extensible foundation for standardizing AI evaluation documentation and practice.

📝 Abstract
The rapid proliferation of benchmarks has created significant challenges in reproducibility, transparency, and informed decision-making. However, unlike datasets and models -- which benefit from structured documentation frameworks like Datasheets and Model Cards -- evaluation methodologies lack systematic documentation standards. We introduce Eval Factsheets, a structured, descriptive framework for documenting AI system evaluations through a comprehensive taxonomy and questionnaire-based approach. Our framework organizes evaluation characteristics across five fundamental dimensions: Context (Who made the evaluation and when?), Scope (What does it evaluate?), Structure (With what is the evaluation built?), Method (How does it work?) and Alignment (In what ways is it reliable/valid/robust?). We implement this taxonomy as a practical questionnaire spanning five sections with mandatory and recommended documentation elements. Through case studies on multiple benchmarks, we demonstrate that Eval Factsheets effectively capture diverse evaluation paradigms -- from traditional benchmarks to LLM-as-judge methodologies -- while maintaining consistency and comparability. We hope Eval Factsheets are incorporated into both existing and newly released evaluation frameworks and lead to greater transparency and reproducibility.
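
To make the taxonomy concrete, the sketch below shows one way a factsheet could be captured as structured data. This is a minimal illustration under stated assumptions: the field names and the mandatory/recommended split are hypothetical, and the paper's actual questionnaire items are not reproduced here.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class EvalFactsheet:
    """Hypothetical record mirroring the paper's five dimensions.

    Field names are illustrative assumptions, not the questionnaire
    items from the paper.
    """
    # Context: who made the evaluation and when?
    authors: list[str]
    release_date: str
    # Scope: what does it evaluate?
    target_capability: str
    intended_systems: list[str]
    # Structure: with what is the evaluation built?
    data_sources: list[str]
    num_items: Optional[int] = None              # recommended field (assumed)
    # Method: how does it work?
    scoring_method: str = "exact_match"          # e.g. exact match or LLM-as-judge
    metrics: list[str] = field(default_factory=list)
    # Alignment: in what ways is it reliable/valid/robust?
    known_limitations: list[str] = field(default_factory=list)
    contamination_checks: Optional[str] = None


# Example: documenting a hypothetical LLM-as-judge evaluation.
chat_eval = EvalFactsheet(
    authors=["Example Research Group"],
    release_date="2025-01-01",
    target_capability="open-ended instruction following",
    intended_systems=["large language models"],
    data_sources=["curated user prompts"],
    num_items=500,
    scoring_method="LLM-as-judge pairwise comparison",
    metrics=["win rate"],
    known_limitations=["judge model bias"],
    contamination_checks="not reported",
)
```

Representing a factsheet as a typed record rather than free text would let mandatory fields be checked at construction time and make cross-evaluation comparison scriptable, which is in the spirit of the comparability goal the abstract describes; whether the authors intend such a machine-readable form is an assumption here.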
Problem

Research questions and friction points this paper is trying to address.

Lack of systematic documentation standards for AI evaluation methodologies.
Challenges in reproducibility and transparency due to benchmark proliferation.
Need for a structured framework to document diverse evaluation paradigms.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structured framework for documenting AI evaluations
Questionnaire-based taxonomy across five dimensions
Captures diverse evaluation paradigms consistently