🤖 AI Summary
This work addresses a critical gap in existing AI safety evaluation frameworks, which struggle to cover the unpredictable, hard-to-control, and potentially irreversible systemic risks introduced by frontier models. We propose the first multi-layered safety assessment framework that integrates both foundational and frontier risk dimensions, structured around seven core safety pillars and 94 granular sub-dimensions, with explicit coverage of high-risk emerging scenarios such as autonomous agents, AI for Science (AI4Science), and embodied intelligence. By combining structured risk data collection, multidimensional safety capability evaluation, and behavioral boundary analysis of large language models, we establish a scalable technical foundation for AI safety governance. Systematic evaluations of more than 20 leading large models have yielded tens of thousands of structured risk records, revealing widespread vulnerabilities across key safety dimensions and providing empirical evidence and open-source tools to support AI alignment and regulatory efforts.
📝 Abstract
Rapidly evolving AI systems exhibit increasingly strong autonomy and goal-directed capabilities, accompanied by derivative systemic risks that are more unpredictable, harder to control, and potentially irreversible. However, current AI safety evaluation systems suffer from critical limitations, including narrow risk coverage and failure to detect frontier risks. Lagging safety benchmarks and alignment technologies can hardly address the complex challenges posed by cutting-edge AI models. To bridge this gap, we propose the "ForesightSafety Bench" AI Safety Evaluation Framework, which begins with 7 major Fundamental Safety pillars and progressively extends to advanced Embodied AI Safety, AI4Science Safety, Social and Environmental AI Risks, Catastrophic and Existential Risks, and 8 critical industrial safety domains, forming a total of 94 refined risk dimensions. To date, the benchmark has accumulated tens of thousands of structured risk data points and assessment results, establishing a broadly encompassing, hierarchically clear, and dynamically evolving AI safety evaluation framework. Based on this benchmark, we conduct systematic evaluation and in-depth analysis of over twenty mainstream advanced large models, identifying key risk patterns and their capability boundaries. The safety capability evaluation results reveal widespread safety vulnerabilities in frontier AI across multiple pillars, particularly in Risky Agentic Autonomy, AI4Science Safety, Embodied AI Safety, Social AI Safety, and Catastrophic and Existential Risks. Our benchmark is released at https://github.com/Beijing-AISI/ForesightSafety-Bench. The project website is available at https://foresightsafety-bench.beijing-aisi.ac.cn/.