🤖 AI Summary
Misuse and goal misalignment are two of the most critical safety risks to address before large-scale AGI deployment. Method: We propose a dual-track defense framework: (1) at the model level, combining amplified oversight, robust training, interpretability analysis, and uncertainty estimation to build an aligned model; and (2) at the system level, implementing tiered access control and real-time behavioral monitoring to mitigate harm even if the model is misaligned. Contribution/Results: This work systematically categorizes and prioritizes four risk types (misuse, misalignment, mistakes, and structural risks), focusing its technical approach on the first two. It treats interpretability and uncertainty estimation as proactive enablers of safety assurance and outlines how these ingredients could be combined into safety cases for AGI systems. The resulting end-to-end methodology spans dangerous-capability identification, safety hardening, dynamic monitoring, and failure containment, supporting auditable AGI safety engineering.
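To make the system-level track concrete, below is a minimal Python sketch of the defense-in-depth pattern the summary describes: tiered access control in front of an untrusted model, and real-time monitoring of its outputs behind it. Everything here is a hypothetical illustration rather than anything from the paper: the names (`AccessTier`, `guarded_generate`, `flag_for_review`) are invented, and the keyword blocklist stands in for what would in practice be a trained dangerous-capability classifier.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class AccessTier(Enum):
    PUBLIC = 0    # rate-limited, heavily filtered access
    VETTED = 1    # verified partners with stronger guarantees
    INTERNAL = 2  # developer access, fully audit-logged

@dataclass
class Request:
    user_tier: AccessTier
    prompt: str

def targets_dangerous_capability(text: str) -> bool:
    """Hypothetical dangerous-capability check. A real system would
    use a trained classifier, not keyword matching."""
    blocklist = ("synthesize pathogen", "zero-day exploit")
    return any(term in text.lower() for term in blocklist)

def flag_for_review(prompt: str, completion: str) -> None:
    """Hypothetical monitoring hook: route the exchange to human audit."""
    print(f"[monitor] flagged for audit: {prompt[:40]!r}")

def guarded_generate(model: Callable[[str], str], req: Request) -> str:
    """System-level line of defense: access control before the model,
    behavioral monitoring after it, so harm can be mitigated even if
    the underlying model is misaligned."""
    # Layer 1: access restriction. Queries targeting dangerous
    # capabilities require a vetted tier or above.
    if req.user_tier == AccessTier.PUBLIC and targets_dangerous_capability(req.prompt):
        return "Request denied: restricted capability."
    completion = model(req.prompt)
    # Layer 2: real-time monitoring of the completion itself.
    if targets_dangerous_capability(completion):
        flag_for_review(req.prompt, completion)
        return "Response withheld pending review."
    return completion

if __name__ == "__main__":
    echo_model = lambda p: f"(model output for: {p})"
    print(guarded_generate(echo_model, Request(AccessTier.PUBLIC, "Explain photosynthesis.")))
```

The design choice worth noting is that the two layers are independent of the model's internals, which is exactly why they remain useful as a second line of defense when model-level alignment fails.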
📝 Abstract
Artificial General Intelligence (AGI) promises transformative benefits but also presents significant risks. We develop an approach to address the risk of severe harms, meaning those consequential enough to significantly harm humanity. We identify four areas of risk: misuse, misalignment, mistakes, and structural risks. Of these, we focus on technical approaches to misuse and misalignment. For misuse, our strategy aims to prevent threat actors from accessing dangerous capabilities by proactively identifying those capabilities and implementing robust security, access restrictions, monitoring, and model safety mitigations. To address misalignment, we outline two lines of defense. First, model-level mitigations such as amplified oversight and robust training can help to build an aligned model. Second, system-level security measures such as monitoring and access control can mitigate harm even if the model is misaligned. Techniques from interpretability, uncertainty estimation, and safer design patterns can enhance the effectiveness of these mitigations. Finally, we briefly outline how these ingredients could be combined to produce safety cases for AGI systems.
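As one illustration of how uncertainty estimation can feed a safer design pattern, the sketch below uses disagreement across an ensemble of safety scorers to decide whether a system acts autonomously or defers to human oversight. The ensemble-disagreement heuristic, the threshold value, and all function names are assumptions made for illustration, not the paper's method.

```python
import statistics
from typing import Callable, Sequence

def ensemble_uncertainty(
    scorers: Sequence[Callable[[str], float]], action: str
) -> tuple[float, float]:
    """Score a proposed action with each ensemble member (a scalar
    'is this safe?' score in [0, 1]) and return the mean score
    together with the disagreement (sample standard deviation)."""
    scores = [score(action) for score in scorers]
    return statistics.mean(scores), statistics.stdev(scores)

def act_or_defer(
    scorers: Sequence[Callable[[str], float]],
    action: str,
    disagreement_threshold: float = 0.15,
) -> str:
    """Safer design pattern: when the scorers disagree or the mean
    safety score is low, escalate to human oversight instead of acting."""
    mean_score, disagreement = ensemble_uncertainty(scorers, action)
    if disagreement > disagreement_threshold or mean_score < 0.5:
        return "defer: escalate to human oversight"
    return "act: proceed autonomously"

if __name__ == "__main__":
    # Toy stand-ins for independently trained safety scorers; their
    # disagreement on this action is enough to trigger deferral.
    members = [lambda a: 0.9, lambda a: 0.85, lambda a: 0.4]
    print(act_or_defer(members, "push unreviewed change to production"))
```

Deferring under disagreement is one concrete way uncertainty estimation can strengthen system-level mitigations: the system's autonomy shrinks exactly where its safety estimates are least trustworthy.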