Responsible AI Technical Report

📅 2025-09-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing security and regulatory compliance challenges in AI services under the implementation of South Korea's Basic Act on AI and evolving global AI governance trends. Method: We propose a responsible AI risk governance framework tailored to the Korean context, featuring a localized four-dimensional AI risk taxonomy; integrating regulatory compliance assessment with verifiable evaluation methods for model safety and robustness; and developing SafetyGuard, a real-time content protection tool for dynamic interception of harmful responses. Contribution/Results: This work constitutes the first systematic integration of regulatory alignment, end-to-end lifecycle risk identification, verifiable assessment, and engineering-grade protection, establishing a closed-loop "classification–assessment–protection–verification" paradigm. The proposed assessment methodology and SafetyGuard have been deployed in production AI services, yielding measurable improvements in threat response latency and compliance efficiency. Our framework provides a reusable, empirically validated methodology and technical infrastructure for advancing Korea's AI safety ecosystem.

📝 Abstract
KT developed a Responsible AI (RAI) assessment methodology and risk mitigation technologies to ensure the safety and reliability of AI services. By analyzing the implementation of the Basic Act on AI and global AI governance trends, we established a distinctive approach to regulatory compliance and systematically identified and managed all potential risk factors from AI development through operation. We present a reliable assessment methodology that systematically verifies model safety and robustness based on KT's AI risk taxonomy tailored to the domestic environment, and we provide practical tools for managing and mitigating identified AI risks. With this report, we also release our proprietary guardrail, SafetyGuard, which blocks harmful responses from AI models in real time, supporting safer practices across the domestic AI development ecosystem. We believe these research outcomes offer valuable insights for organizations seeking to develop Responsible AI.
Problem

Research questions and friction points this paper is trying to address.

Ensuring AI service safety and reliability
Systematically identifying and managing AI risks
Providing real-time harmful response mitigation tools
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed RAI assessment methodology and risk mitigation technologies
Established unique regulatory compliance approach for risk management
Released proprietary SafetyGuard tool for real-time response blocking
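SafetyGuard itself is proprietary and the report excerpt above does not describe its API, so the following is an illustration only: a minimal sketch of the real-time interception pattern a guardrail of this kind implements, where a model response is checked against risk categories before it reaches the user. All names, categories, and patterns here are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GuardResult:
    blocked: bool
    category: Optional[str]  # risk category that triggered the block, if any
    text: str                # text actually delivered to the user

# Hypothetical risk taxonomy mapped to simple trigger patterns; a production
# guardrail would use trained classifiers, not substring matching.
BLOCKED_PATTERNS = {
    "violence": ["how to build a weapon"],
    "privacy": ["social security number"],
}

REFUSAL = "I can't help with that request."

def guard_response(model_output: str) -> GuardResult:
    """Intercept a model response in real time, before delivery."""
    lowered = model_output.lower()
    for category, patterns in BLOCKED_PATTERNS.items():
        if any(p in lowered for p in patterns):
            # Harmful content detected: replace with a refusal.
            return GuardResult(True, category, REFUSAL)
    # Safe content passes through unchanged.
    return GuardResult(False, None, model_output)
```

The key design point this sketch captures is that the guardrail sits between the model and the user, so harmful outputs can be replaced dynamically without retraining the underlying model.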
Soonmin Bae
NAVER Clova
Wanjin Park
Responsible AI Center, KT
Jeongyeop Kim
Responsible AI Center, KT
Yunjin Park
Responsible AI Center, KT
Jungwon Yoon
Professor, Gwangju Institute of Science and Technology
Junhyung Moon
Responsible AI Center, KT
Myunggyo Oh
Responsible AI Center, KT
Wonhyuk Lee
Responsible AI Center, KT
Junseo Jang
Responsible AI Center, KT
Dongyoung Jung
Responsible AI Center, KT
Minwook Ju
Responsible AI Center, KT
Eunmi Kim
Responsible AI Center, KT
Sujin Kim
Responsible AI Center, KT
Youngchol Kim
Responsible AI Center, KT
Somin Lee
Responsible AI Center, KT
Wonyoung Lee
Responsible AI Center, KT
Minsung Noh
Responsible AI Center, KT
Hyoungjun Park
Responsible AI Center, KT
Eunyoung Shin
Responsible AI Center, KT