🤖 AI Summary
Current Responsible Artificial Intelligence (RAI) initiatives suffer from a critical design–implementation gap and fragmented ethical standards, with no globally harmonized, actionable governance framework. This study addresses these challenges through a systematic literature review, cross-regional policy analysis—including ISO/IEC standards, NIST AI Risk Management Framework, and the EU AI Act—and empirical case studies from industry practice. It is the first to synthesize the multi-tiered evolution of RAI standards, proposing a novel “design-driven—not implementation-driven” paradigm and uncovering how societal pressure and ethical failures act as catalysts for RAI institutionalization. The research maps the end-to-end RAI lifecycle, identifying six recurrent operational challenges and twelve categories of empirically validated best practices. Collectively, these contributions provide theoretical grounding and evidence-based decision support for public–private collaboration in developing standardized, implementable RAI governance pathways.
📝 Abstract
Responsible Artificial Intelligence (RAI) refers to the ethical use of artificial intelligence in alignment with common, standardized frameworks. This survey paper discusses global and national standards, applications of RAI, current technologies and ongoing projects that employ RAI, and the challenges of designing and implementing RAI in AI-based industries and projects. At present, ethical standards and RAI implementation are decoupled, leaving each industry to follow its own standards for using AI ethically. Many global firms and government organizations are taking initiatives to design a common, standardized framework. Societal pressure and unethical uses of AI are driving RAI at the design stage rather than the implementation stage.