🤖 AI Summary
This study evaluates compliance by 16 major AI companies with the White House’s eight voluntary AI commitments. We introduce the first publicly grounded, quantitative assessment framework, employing a multidimensional scoring rubric and mixed-method coding of transparency disclosures to enable cross-firm comparison. Results reveal an overall average compliance rate of just 52% (OpenAI leads at 83%), with the “model weight security” commitment scoring a mere 17% on average—11 firms received zero points—highlighting systemic noncompliance. Our contributions are threefold: (1) a reproducible, empirically verifiable paradigm for AI governance evaluation; (2) identification of three critical governance bottlenecks—vague commitment language, opaque supply chains, and absent disclosure mechanisms; and (3) concrete policy recommendations, including mandating verifiable disclosures within regulatory frameworks, to transition AI governance from voluntary pledges toward accountable, trust-based oversight.
📝 Abstract
Voluntary commitments are central to international AI governance, as demonstrated by recent voluntary guidelines from the White House to the G7, and from Bletchley Park to Seoul. How well do major AI companies make good on their commitments? We score companies based on their publicly disclosed behavior, using a detailed rubric we develop from the eight voluntary commitments they made to the White House in 2023. We find significant heterogeneity: while the highest-scoring company (OpenAI) scores 83% overall on our rubric, the average score across all companies is just 52%. Companies perform systematically poorly on their commitment to model weight security, with an average score of 17%: 11 of the 16 companies receive 0% for this commitment. Our analysis highlights a clear structural shortcoming that future AI governance initiatives should correct: when companies make public commitments, they should proactively disclose how they meet them to provide accountability, and these disclosures should be verifiable. To advance policymaking on corporate AI governance, we offer three targeted recommendations addressing underspecified commitments, the role of complex AI supply chains, and public transparency, all of which could be applied to AI governance initiatives worldwide.
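The scoring described above aggregates per-commitment rubric scores into company totals and per-commitment averages. A minimal sketch of that aggregation is below; the company names, commitment labels, and score values are invented placeholders for illustration, not the study's actual rubric or data.

```python
# Hypothetical sketch of rubric aggregation: each company receives a 0-1
# score per commitment; we average across commitments for a company's
# overall score, and across companies for a per-commitment average.
# All names and numbers are illustrative placeholders, not study data.

COMMITMENTS = ["security_testing", "model_weight_security", "public_reporting"]

scores = {
    "CompanyA": {"security_testing": 1.0, "model_weight_security": 0.0, "public_reporting": 0.5},
    "CompanyB": {"security_testing": 0.5, "model_weight_security": 0.0, "public_reporting": 1.0},
}

def company_score(name: str) -> float:
    """Average one company's scores across all commitments."""
    vals = list(scores[name].values())
    return sum(vals) / len(vals)

def commitment_average(commitment: str) -> float:
    """Average one commitment's score across all companies."""
    vals = [s[commitment] for s in scores.values()]
    return sum(vals) / len(vals)
```

With these placeholder numbers, `company_score("CompanyA")` is 0.5, and `commitment_average("model_weight_security")` is 0.0, mirroring how a commitment can score near zero even when companies' overall averages are moderate.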