🤖 AI Summary
The widespread deployment of general-purpose AI (GPAI) systems introduces novel risks, yet the existing infrastructure for disclosing flaws in these systems remains underdeveloped: it lacks standardized practices, governance frameworks, and institutional coordination, especially compared with mature domains such as software security.
Method: This paper proposes a robust, third-party–driven, cross-organizational flaw-reporting ecosystem for GPAI. It introduces a standardized AI flaw report template and rules of engagement for researchers; calls for legally protected, broadly scoped GPAI flaw disclosure programs modeled on bug bounties; and proposes multi-stakeholder infrastructure for coordinating distribution of flaw reports. The approach draws on expertise from software security, machine learning, law, social science, and policy to align disclosure practices with the characteristics of AI systems.
Contribution/Results: The proposed three-part intervention framework is designed to improve flaw reproducibility, reduce response latency, and help contain flaws that transfer across different providers' systems. It offers a practical, scalable path toward safer, more transparent, and more accountable GPAI governance.
📝 Abstract
The widespread deployment of general-purpose AI (GPAI) systems introduces significant new risks. Yet the infrastructure, practices, and norms for reporting flaws in GPAI systems remain seriously underdeveloped, lagging far behind more established fields like software security. Based on a collaboration between experts from the fields of software security, machine learning, law, social science, and policy, we identify key gaps in the evaluation and reporting of flaws in GPAI systems. We call for three interventions to advance system safety. First, we propose using standardized AI flaw reports and rules of engagement for researchers in order to ease the process of submitting, reproducing, and triaging flaws in GPAI systems. Second, we propose GPAI system providers adopt broadly scoped flaw disclosure programs, borrowing from bug bounties, with legal safe harbors to protect researchers. Third, we advocate for the development of improved infrastructure to coordinate distribution of flaw reports across the many stakeholders who may be impacted. These interventions are increasingly urgent, as evidenced by the prevalence of jailbreaks and other flaws that can transfer across different providers' GPAI systems. By promoting robust reporting and coordination in the AI ecosystem, these proposals could significantly improve the safety, security, and accountability of GPAI systems.
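As a purely illustrative sketch (the field names below are hypothetical and not the paper's actual template), a standardized AI flaw report of the kind the first intervention envisions could be modeled as a small structured record, so that submission, reproduction, and triage can be partially automated:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class FlawReport:
    """Hypothetical standardized AI flaw report; fields are illustrative only."""
    system: str                      # affected GPAI system and version
    flaw_description: str            # what behavior violates policy or expectations
    reproduction_steps: list[str]    # inputs/prompts needed to reproduce the flaw
    severity: str                    # e.g. "low" | "medium" | "high"
    transferable: bool = False       # whether the flaw reproduces on other providers' systems
    reporter_contact: str = ""       # optional, for coordinated follow-up

    def to_json(self) -> str:
        # Serialize for submission to a disclosure program or coordination body.
        return json.dumps(asdict(self), indent=2)

# Example: a jailbreak that transfers across providers, flagged for wide coordination.
report = FlawReport(
    system="example-gpai-v1",
    flaw_description="Jailbreak prompt elicits policy-violating output",
    reproduction_steps=["Send crafted prompt", "Observe policy-violating response"],
    severity="high",
    transferable=True,
)
print(report.to_json())
```

The `transferable` flag reflects the abstract's observation that jailbreaks and other flaws can transfer across providers' systems, which is what motivates routing such reports through shared coordination infrastructure rather than a single vendor.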