🤖 AI Summary
This study identifies a significant coverage gap and division of evaluation labor between first-party (developer-led) and third-party (academic, nonprofit, and independent) social impact assessments of AI systems, spanning bias, fairness, privacy, environmental costs, and labor practices. Methodologically, it presents the first comprehensive comparative analysis of 186 first-party release reports and 183 post-release evaluation sources, combining content analysis, quantitative comparison, and in-depth interviews with model developers. The results reveal persistent under-disclosure by first parties on critical issues such as environmental impact and bias, while third-party assessments, though broader and more rigorous, remain constrained by data opacity and lack of access to proprietary infrastructure and internal documentation. The core contribution is the empirical identification of structural imbalances in the AI evaluation ecosystem. The paper proposes shared infrastructure to aggregate and compare independent evaluations, improve verifiability, and strengthen accountability, establishing both a methodological foundation and an actionable policy pathway for AI governance.
📝 Abstract
Foundation models are increasingly central to high-stakes AI systems, and governance frameworks now depend on evaluations to assess their risks and capabilities. Although general capability evaluations are widespread, social impact assessments covering bias, fairness, privacy, environmental costs, and labor practices remain uneven across the AI ecosystem. To characterize this landscape, we conduct the first comprehensive analysis of both first-party and third-party social impact evaluation reporting across a wide range of model developers. Our study examines 186 first-party release reports and 183 post-release evaluation sources, and complements this quantitative analysis with interviews of model developers. We find a clear division of evaluation labor: first-party reporting is sparse, often superficial, and has declined over time in key areas such as environmental impact and bias, while third-party evaluators, including academic researchers, nonprofits, and independent organizations, provide broader and more rigorous coverage of bias, harmful content, and performance disparities. However, this complementarity has limits. Only model developers can authoritatively report on data provenance, content moderation labor, financial costs, and training infrastructure, yet interviews reveal that these disclosures are often deprioritized unless tied to product adoption or regulatory compliance. Our findings indicate that current evaluation practices leave major gaps in assessing AI's societal impacts, highlighting the urgent need for policies that promote developer transparency, strengthen independent evaluation ecosystems, and create shared infrastructure to aggregate and compare third-party evaluations in a consistent and accessible way.