ForesightSafety Bench: A Frontier Risk Evaluation and Governance Framework towards Safe AI

📅 2026-02-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the critical gap in existing AI safety evaluation frameworks, which struggle to encompass the unpredictable, hard-to-control, and potentially irreversible systemic risks introduced by frontier models. We propose the first multi-layered safety assessment framework that integrates both foundational and frontier risk dimensions, structured around seven core safety pillars and 94 granular sub-dimensions, with explicit coverage of high-risk emerging scenarios such as autonomous agents, AI for Science (AI4Science), and embodied intelligence. By combining structured risk data collection, multidimensional safety capability evaluation, and behavioral boundary analysis of large language models, we establish a scalable technical foundation for AI safety governance. Systematic evaluations of over 20 leading large models have yielded tens of thousands of structured risk records, revealing widespread vulnerabilities across key safety dimensions and providing empirical evidence and open-source tools to support AI alignment and regulatory efforts.

📝 Abstract
Rapidly evolving AI exhibits increasingly strong autonomy and goal-directed capabilities, accompanied by derivative systemic risks that are more unpredictable, difficult to control, and potentially irreversible. However, current AI safety evaluation systems suffer from critical limitations, including restricted risk dimensions and failures to detect frontier risks. Lagging safety benchmarks and alignment technologies can hardly address the complex challenges posed by cutting-edge AI models. To bridge this gap, we propose the "ForesightSafety Bench" AI Safety Evaluation Framework, which begins with 7 major Fundamental Safety pillars and progressively extends to advanced Embodied AI Safety, AI4Science Safety, Social and Environmental AI risks, Catastrophic and Existential Risks, as well as 8 critical industrial safety domains, forming a total of 94 refined risk dimensions. To date, the benchmark has accumulated tens of thousands of structured risk data points and assessment results, establishing a broadly encompassing, hierarchically clear, and dynamically evolving AI safety evaluation framework. Based on this benchmark, we conduct systematic evaluation and in-depth analysis of over twenty mainstream advanced large models, identifying key risk patterns and their capability boundaries. The safety capability evaluation results reveal widespread safety vulnerabilities of frontier AI across multiple pillars, particularly in Risky Agentic Autonomy, AI4Science Safety, Embodied AI Safety, Social AI Safety, and Catastrophic and Existential Risks. Our benchmark is released at https://github.com/Beijing-AISI/ForesightSafety-Bench. The project website is available at https://foresightsafety-bench.beijing-aisi.ac.cn/.
Problem
Research questions and friction points this paper is trying to address: AI safety, frontier risks, systemic risks, safety evaluation, autonomous AI.
Innovation
Methods, ideas, or system contributions that make the work stand out: ForesightSafety Bench, AI safety evaluation, frontier AI risks, structured risk benchmark, agentic autonomy.
Authors
Haibo Tong
Beijing Institute of AI Safety and Governance, China.; Beijing Key Laboratory of Safe AI and Superalignment, China.; BrainCog Lab, Institute of Automation, Chinese Academy of Sciences, China.; University of Chinese Academy of Sciences, China.
Feifei Zhao
Beijing Institute of AI Safety and Governance, China.; Beijing Key Laboratory of Safe AI and Superalignment, China.; BrainCog Lab, Institute of Automation, Chinese Academy of Sciences, China.; Long-term AI, China.
Linghao Feng
Beijing Institute of AI Safety and Governance, China.; Beijing Key Laboratory of Safe AI and Superalignment, China.; BrainCog Lab, Institute of Automation, Chinese Academy of Sciences, China.; University of Chinese Academy of Sciences, China.
Ruoyu Wu
Beijing Institute of AI Safety and Governance, China.; Beijing Key Laboratory of Safe AI and Superalignment, China.; BrainCog Lab, Institute of Automation, Chinese Academy of Sciences, China.
Ruolin Chen
Beijing Institute of AI Safety and Governance, China.; Beijing Key Laboratory of Safe AI and Superalignment, China.; BrainCog Lab, Institute of Automation, Chinese Academy of Sciences, China.; University of Chinese Academy of Sciences, China.
Lu Jia
Beijing Institute of AI Safety and Governance, China.; Beijing Key Laboratory of Safe AI and Superalignment, China.; BrainCog Lab, Institute of Automation, Chinese Academy of Sciences, China.
Zhou Zhao
Zhejiang University
Machine Learning; Data Mining; Multimedia Computing
Jindong Li
Institute of Automation, Chinese Academy of Sciences
domain-specific architecture; FPGA accelerator; large language model; spiking neural network
Tenglong Li
Institute of Automation, Chinese Academy of Sciences
Hardware Architecture
Erliang Lin
Beijing Institute of AI Safety and Governance, China.; Beijing Key Laboratory of Safe AI and Superalignment, China.; BrainCog Lab, Institute of Automation, Chinese Academy of Sciences, China.; University of Chinese Academy of Sciences, China.
Shuai Yang
Beijing Institute of AI Safety and Governance, China.
Enmeng Lu
Beijing Institute of AI Safety and Governance, China.; Beijing Key Laboratory of Safe AI and Superalignment, China.; Long-term AI, China.
Yinqian Sun
Beijing Institute of AI Safety and Governance, China.; Beijing Key Laboratory of Safe AI and Superalignment, China.; BrainCog Lab, Institute of Automation, Chinese Academy of Sciences, China.; Long-term AI, China.
Qian Zhang
Beijing Institute of AI Safety and Governance, China.; Beijing Key Laboratory of Safe AI and Superalignment, China.; BrainCog Lab, Institute of Automation, Chinese Academy of Sciences, China.; Long-term AI, China.
Zizhe Ruan
Beijing Key Laboratory of Safe AI and Superalignment, China.; BrainCog Lab, Institute of Automation, Chinese Academy of Sciences, China.; Long-term AI, China.
Zeyang Yue
Beijing Institute of AI Safety and Governance, China.; Beijing Key Laboratory of Safe AI and Superalignment, China.; BrainCog Lab, Institute of Automation, Chinese Academy of Sciences, China.
Ping Wu
Beijing Institute of AI Safety and Governance, China.; Beijing Key Laboratory of Safe AI and Superalignment, China.; BrainCog Lab, Institute of Automation, Chinese Academy of Sciences, China.; University of Chinese Academy of Sciences, China.
Huangrui Li
Beijing Institute of AI Safety and Governance, China.; Beijing Key Laboratory of Safe AI and Superalignment, China.; BrainCog Lab, Institute of Automation, Chinese Academy of Sciences, China.
Chengyi Sun
Beijing Institute of AI Safety and Governance, China.; Beijing Key Laboratory of Safe AI and Superalignment, China.; BrainCog Lab, Institute of Automation, Chinese Academy of Sciences, China.
Yi Zeng
Institute of Automation, Chinese Academy of Sciences
Brain-inspired AI; AI Safety; AI Ethics and Governance