🤖 AI Summary
Increasing autonomy in frontier AI systems poses systemic, society-scale risks, yet current AI development lacks transparency around safety measures, testing protocols, and governance, undermining the verifiability of safety claims and the enforceability of accountability. Method: We propose the first cross-domain, verifiable safety accountability framework, drawing on regulatory and engineering practices from the nuclear energy, aviation software, and medical device industries. It introduces mandatory safety documentation standards and a layered accountability architecture, grounded in formal responsibility modeling, cross-sectoral compliance mapping, and risk-based attribution analysis. Contribution/Results: The framework yields an actionable safety disclosure template and principled responsibility-attribution guidelines. It gives regulators and AI labs institutionalized tools to improve the trustworthiness, auditability, and accountability of high-risk AI development, thereby enabling rigorous, evidence-based oversight and liability assignment.
📝 Abstract
As artificial intelligence systems grow more capable and autonomous, frontier AI development poses systemic risks that could affect society at scale. Current practices at many of the AI labs developing these systems lack sufficient transparency around safety measures, testing procedures, and governance structures. This opacity makes it difficult to verify safety claims or to establish appropriate liability when harm occurs. Drawing on liability frameworks from the nuclear energy, aviation software, and healthcare industries, we propose a comprehensive approach to safety documentation and accountability in frontier AI development.
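
To make the idea of a safety disclosure template concrete, here is a minimal, hypothetical sketch of what a machine-readable disclosure record could look like. All field names and the completeness check are illustrative assumptions for this summary, not the template actually proposed in the paper.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a machine-readable safety disclosure record.
# Field names are illustrative assumptions, not the paper's actual template.

@dataclass
class Evaluation:
    name: str            # e.g. a capability or dangerous-capability eval
    protocol: str        # how the test was run (versioned, reproducible)
    result_summary: str  # outcome, ideally with links to full evidence

@dataclass
class SafetyDisclosure:
    system_name: str
    developer: str
    release_stage: str                       # e.g. "internal", "limited", "general"
    risk_assessment: str                     # summary of identified systemic risks
    evaluations: list[Evaluation] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    responsible_parties: dict[str, str] = field(default_factory=dict)  # role -> accountable party

    def missing_fields(self) -> list[str]:
        """Return disclosure sections that are still empty, so an auditor
        can see at a glance where the record falls short."""
        gaps = []
        if not self.risk_assessment:
            gaps.append("risk_assessment")
        if not self.evaluations:
            gaps.append("evaluations")
        if not self.mitigations:
            gaps.append("mitigations")
        if not self.responsible_parties:
            gaps.append("responsible_parties")
        return gaps

# Example usage: an auditor checks a draft disclosure for gaps.
disclosure = SafetyDisclosure(
    system_name="ExampleModel-1",
    developer="Example Lab",
    release_stage="limited",
    risk_assessment="",  # not yet filled in
    evaluations=[Evaluation("autonomy-eval", "v2 protocol, 3 seeded runs", "below threshold")],
)
print(disclosure.missing_fields())  # ['risk_assessment', 'mitigations', 'responsible_parties']
```

A structured record of this kind is one way verifiability could be operationalized: regulators can mechanically flag incomplete disclosures before any substantive review, mirroring the documentation gates used in the aviation and medical device industries the paper draws on.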