The Singapore Consensus on Global AI Safety Research Priorities

📅 2025-06-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
How can a globally coordinated AI safety research ecosystem be established that balances innovation incentives with risk mitigation? The report proposes a "defense-in-depth" framework, introducing a three-tiered classification of AI safety research covering AI system development, risk assessment, and post-deployment control. Through a systematic literature review and collaborative analysis of AI policies and research capacities across 33 countries, it synthesizes diverse perspectives into the first internationally agreed AI safety research roadmap. Its key contributions are: (1) the first structured, cross-layer and cross-jurisdictional taxonomy of AI safety research, integrating technical governance with multilateral coordination mechanisms; and (2) actionable outputs that directly inform national AI safety policy formulation and multinational research agendas, providing both a methodological foundation and practical guidance for building trustworthy AI ecosystems.

📝 Abstract
Rapidly improving AI capabilities and autonomy hold significant promise of transformation, but are also driving vigorous debate on how to ensure that AI is safe, i.e., trustworthy, reliable, and secure. Building a trusted ecosystem is therefore essential -- it helps people embrace AI with confidence and gives maximal space for innovation while avoiding backlash. The "2025 Singapore Conference on AI (SCAI): International Scientific Exchange on AI Safety" aimed to support research in this space by bringing together AI scientists across geographies to identify and synthesise research priorities in AI safety. The resulting report builds on the International AI Safety Report chaired by Yoshua Bengio and backed by 33 governments. By adopting a defence-in-depth model, this report organises AI safety research domains into three types: challenges with creating trustworthy AI systems (Development), challenges with evaluating their risks (Assessment), and challenges with monitoring and intervening after deployment (Control).
Problem

Research questions and friction points the paper addresses.

Ensuring AI is safe, trustworthy, reliable, and secure
Identifying research priorities for AI safety globally
Addressing challenges in development, assessment, and control of AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Defence-in-depth model for AI safety
Three research domains: Development, Assessment, Control
International collaboration for AI safety priorities