Do AI Companies Make Good on Voluntary Commitments to the White House?

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study evaluates compliance by 16 major AI companies with the eight voluntary AI commitments they made to the White House. We introduce the first publicly grounded, quantitative assessment framework, employing a multidimensional scoring rubric and mixed-method coding of transparency disclosures to enable cross-firm comparison. Results reveal an overall average compliance rate of just 52% (OpenAI leads at 83%). The "model weight security" commitment scores a mere 17% on average, with 11 firms receiving zero points, highlighting systemic noncompliance. Our contributions are threefold: (1) a reproducible, empirically verifiable paradigm for AI governance evaluation; (2) identification of three critical governance bottlenecks: vague commitment language, opaque supply chains, and absent disclosure mechanisms; and (3) concrete policy recommendations, including mandating verifiable disclosures within regulatory frameworks, to move AI governance from voluntary pledges toward accountable, trustworthy oversight.

📝 Abstract
Voluntary commitments are central to international AI governance, as demonstrated by recent voluntary guidelines from the White House to the G7, from Bletchley Park to Seoul. How do major AI companies make good on their commitments? We score companies based on their publicly disclosed behavior by developing a detailed rubric based on their eight voluntary commitments to the White House in 2023. We find significant heterogeneity: while the highest-scoring company (OpenAI) scores 83% overall on our rubric, the average score across all companies is just 52%. The companies demonstrate systematically poor performance on their commitment to model weight security, with an average score of 17%: 11 of the 16 companies receive 0% for this commitment. Our analysis highlights a clear structural shortcoming that future AI governance initiatives should correct: when companies make public commitments, they should proactively disclose how they meet their commitments to provide accountability, and these disclosures should be verifiable. To advance policymaking on corporate AI governance, we provide three targeted recommendations that address underspecified commitments, the role of complex AI supply chains, and public transparency, which could be applied to AI governance initiatives worldwide.
Problem

Research questions and friction points this paper is trying to address.

Assessing AI companies' adherence to voluntary White House commitments
Evaluating transparency and accountability in AI governance practices
Identifying shortcomings in model weight security commitments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scoring companies based on disclosed behavior
Developing rubric for voluntary commitments
Recommending verifiable public disclosures
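The core of the method is a rubric that turns each company's public disclosures into per-commitment scores, which are then averaged into an overall compliance rate (e.g., 52% on average, 17% for model weight security). A minimal sketch of that aggregation step is below; the commitment names, point scale, and example scores are illustrative assumptions, not the paper's actual rubric or data.

```python
# Hypothetical sketch of rubric-based compliance scoring: each of the eight
# White House commitments gets a score in [0, 1] from coded public
# disclosures, and a company's overall rate is the average across all eight.
# The commitment labels below are paraphrased placeholders, not the paper's
# exact rubric items.

COMMITMENTS = [
    "security_testing", "information_sharing", "model_weight_security",
    "vulnerability_reporting", "content_provenance", "public_reporting",
    "societal_risk_research", "beneficial_applications",
]

def compliance_score(scores: dict[str, float]) -> float:
    """Average per-commitment scores into one overall rate.

    Commitments with no disclosed evidence default to 0, mirroring the
    paper's observation that absent disclosures drag averages down.
    """
    return sum(scores.get(c, 0.0) for c in COMMITMENTS) / len(COMMITMENTS)

# Illustrative example: a company fully meeting four of the eight
# commitments and disclosing nothing for the rest scores 50% overall.
example = {c: 1.0 for c in COMMITMENTS[:4]}
print(f"{compliance_score(example):.0%}")  # → 50%
```

Defaulting missing commitments to zero (rather than excluding them from the denominator) is one way to make non-disclosure costly, which matches the paper's argument that companies should proactively and verifiably disclose how they meet each commitment.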