🤖 AI Summary
This report, the second update to the 2025 International AI Safety Report, surveys developments in risk management for general-purpose AI over the past year. It addresses two core challenges: the risk of malicious misuse, particularly in dual-use domains such as biological weapons development, and insufficient system reliability. On the technical side, it reviews advances that make models more reliable and resistant to misuse, including new approaches in adversarial training, data curation, and monitoring systems. On the institutional side, it tracks emerging governance: three leading AI developers applied enhanced safeguards to new models after pre-deployment testing could not rule out misuse for biological weapons development; the number of companies publishing Frontier AI Safety Frameworks more than doubled in 2025; and governments and international organisations established early governance frameworks for general-purpose AI, focused largely on transparency and risk assessment. Collectively, these developments mark a shift in risk management for general-purpose AI: from isolated technical countermeasures toward systemic, institutionalised governance.
📝 Abstract
This second update to the 2025 International AI Safety Report assesses new developments in general-purpose AI risk management over the past year. It examines how researchers, public institutions, and AI developers are approaching risk management for general-purpose AI. In recent months, for example, three leading AI developers applied enhanced safeguards to their new models, as their internal pre-deployment testing could not rule out the possibility that these models could be misused to help create biological weapons. Beyond specific precautionary measures, there has been a range of other advances in techniques for making AI models and systems more reliable and resistant to misuse. These include new approaches in adversarial training, data curation, and monitoring systems. In parallel, institutional frameworks that operationalise and formalise these technical capabilities are starting to emerge: the number of companies publishing Frontier AI Safety Frameworks more than doubled in 2025, and governments and international organisations have established a small number of governance frameworks for general-purpose AI, focusing largely on transparency and risk assessment.