Audit Cards: Contextualizing AI Evaluations

📅 2025-04-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current AI governance relies heavily on audits, yet evaluation reports frequently omit critical contextual information, such as auditor identity, conflicts of interest, model access privileges, and methodological limitations, undermining their interpretability, comparability, and trustworthiness. Method: The paper introduces the "audit card" framework, a structured specification of six contextual dimensions (auditor identity, evaluation scope, methodology, resource access, process integrity, and review mechanisms) that audit reports should disclose and justify, integrating sociotechnical perspectives into audit practice. Grounded in a literature review, multi-stakeholder interviews, and analysis of governance frameworks, the authors document the pervasive absence of such context in existing reports. Contribution/Results: Audit cards provide a structured format for reporting key claims alongside their justifications, improving transparency, interpretability, and cross-report comparability of AI evaluations, and offering a disclosure paradigm to advance standardization, credibility, and trust in AI auditing.

📝 Abstract
AI governance frameworks increasingly rely on audits, yet the results of their underlying evaluations require interpretation and context to be meaningfully informative. Even technically rigorous evaluations can offer little useful insight if reported selectively or obscurely. Current literature focuses primarily on technical best practices, but evaluations are an inherently sociotechnical process, and there is little guidance on reporting procedures and context. Through literature review, stakeholder interviews, and analysis of governance frameworks, we propose "audit cards" to make this context explicit. We identify six key types of contextual features to report and justify in audit cards: auditor identity, evaluation scope, methodology, resource access, process integrity, and review mechanisms. Through analysis of existing evaluation reports, we find significant variation in reporting practices, with most reports omitting crucial contextual information such as auditors' backgrounds, conflicts of interest, and the level and type of access to models. We also find that most existing regulations and frameworks lack guidance on rigorous reporting. In response to these shortcomings, we argue that audit cards can provide a structured format for reporting key claims alongside their justifications, enhancing transparency, facilitating proper interpretation, and establishing trust in reporting.
Problem

Research questions and friction points this paper is trying to address.

Lack of context in AI audit evaluations
Inconsistent reporting of key audit details
Need for standardized audit reporting framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes audit cards for contextualizing AI evaluations
Identifies six key contextual features for audits
Enhances transparency and trust in AI reporting