🤖 AI Summary
Addressing security and regulatory compliance challenges in AI services under the implementation of Korea's Basic Act on AI and evolving global AI governance trends.
Method: We propose a responsible AI risk governance framework tailored to the Korean context, featuring a localized four-dimensional AI risk taxonomy; integrating regulatory compliance assessment with verifiable evaluation methods for model safety and robustness; and developing SafetyGuard, a real-time content protection tool that dynamically intercepts harmful responses.
Contribution/Results: This work constitutes the first systematic integration of regulatory alignment, end-to-end lifecycle risk identification, verifiable assessment, and engineering-grade protection, establishing a closed-loop "classification–assessment–protection–verification" paradigm. The proposed assessment methodology and SafetyGuard have been deployed in production AI services, yielding measurable improvements in threat response latency and compliance efficiency. Our framework provides a reusable, empirically validated methodology and technical infrastructure for advancing Korea's AI safety ecosystem.
📝 Abstract
KT developed a Responsible AI (RAI) assessment methodology and risk mitigation technologies to ensure the safety and reliability of AI services. By analyzing the implementation of the Basic Act on AI and global AI governance trends, we established a distinctive approach to regulatory compliance and systematically identified and managed potential risk factors across the AI lifecycle, from development to operation. We present a reliable assessment methodology that systematically verifies model safety and robustness, grounded in KT's AI risk taxonomy tailored to the domestic environment, along with practical tools for managing and mitigating identified AI risks. Alongside this report, we release our proprietary guardrail, SafetyGuard, which blocks harmful responses from AI models in real time, supporting safer AI development across the domestic ecosystem. We believe these research outcomes provide valuable insights for organizations seeking to develop Responsible AI.
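To make the real-time interception idea concrete, the following is a minimal sketch of a guardrail wrapper around a text-generation call. It is an illustration only: the function names (`guard_response`, `safe_generate`), the pattern list, and the fallback message are all hypothetical and do not reflect SafetyGuard's actual detection logic, which is not described in this report's abstract.

```python
import re
from dataclasses import dataclass

# Hypothetical block list; a production guardrail would use trained
# classifiers aligned to a risk taxonomy, not regex patterns.
BLOCK_PATTERNS = [
    re.compile(r"\bhow to build a bomb\b", re.IGNORECASE),
    re.compile(r"\bcredit card numbers?\b", re.IGNORECASE),
]

@dataclass
class GuardResult:
    allowed: bool
    reason: str

def guard_response(text: str) -> GuardResult:
    """Screen a model response before it reaches the user."""
    for pattern in BLOCK_PATTERNS:
        if pattern.search(text):
            return GuardResult(False, f"matched blocked pattern: {pattern.pattern}")
    return GuardResult(True, "clean")

def safe_generate(model_fn, prompt: str,
                  fallback: str = "[response withheld by guardrail]") -> str:
    """Wrap a generation call so harmful outputs are intercepted in real time."""
    response = model_fn(prompt)          # any callable: prompt -> text
    result = guard_response(response)
    return response if result.allowed else fallback
```

The key design point is that the guardrail sits between the model and the user, so interception happens per response without modifying the model itself.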