🤖 AI Summary
AI safety vulnerabilities, including algorithmic bias and adversarial fragility, fuel misinformation, inequity, security threats, and eroded public trust, exposing critical gaps in existing governance frameworks. To address this, we propose a unified three-dimensional governance framework built on "intrinsic safety," "derivative safety," and "socio-ethical alignment." This is the first approach to holistically integrate technical defenses (e.g., robustness enhancement, fairness-aware modeling, adversarial detection), real-world risk assessment (leveraging emerging evaluation benchmarks), and cross-disciplinary policy coordination. Through a systematic review of over 300 studies, we identify three core challenges: generalization gaps, insufficient evaluation rigor, and regulatory fragmentation, and we argue for proactive, lifecycle-integrated governance rather than reactive remediation. Our work delivers four outputs: actionable technical guidelines, standardized evaluation metrics, evidence-based policy recommendations, and an open-source tool suite to advance global trustworthy AI ecosystems.
📝 Abstract
The rapid advancement of AI has expanded its capabilities across domains, yet has introduced critical technical vulnerabilities, such as algorithmic bias and adversarial sensitivity, that pose significant societal risks, including misinformation, inequity, security breaches, physical harm, and eroded public trust. These challenges highlight the urgent need for robust AI governance. We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security (system reliability), Derivative Security (real-world harm mitigation), and Social Ethics (value alignment and accountability). Uniquely, our approach unifies technical methods, emerging evaluation benchmarks, and policy insights to promote transparency, accountability, and trust in AI systems. Through a systematic review of over 300 studies, we identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations that lead to inconsistent oversight. These shortcomings stem from treating governance as an afterthought rather than a foundational design principle, resulting in reactive, siloed efforts that fail to address the interdependence of technical integrity and societal trust. To overcome this, we present an integrated research agenda that bridges technical rigor with social responsibility. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy. The accompanying repository is available at https://github.com/ZTianle/Awesome-AI-SG.