🤖 AI Summary
This study addresses the persistent ambiguity in current AI regulations over the definitions of "AI model" and "AI system," which undermines clear accountability and complicates the allocation of legal obligations. Through a systematic review of 896 academic publications and over 80 policy and standards documents, combined with conceptual lineage tracing and case studies, the work establishes a precise conceptual boundary: an AI model refers specifically to the core machine learning component, including its parameters and architecture, whereas an AI system encompasses the model together with its input/output interfaces and other functional components. The proposed definitions combine theoretical rigor with regulatory applicability, resolving ambiguities inherited from the OECD framework and demonstrating explanatory power and practical utility in delineating responsibilities across the AI value chain, as validated against multiple real-world incidents.
📝 Abstract
Emerging AI regulations assign distinct obligations to different actors along the AI value chain (e.g., the EU AI Act distinguishes providers and deployers for both AI models and AI systems), yet the foundational terms "AI model" and "AI system" lack clear, consistent definitions. Through a systematic review of 896 academic papers and a manual review of over 80 regulatory, standards, and technical or policy documents, we analyze existing definitions from multiple conceptual perspectives. We then trace definitional lineages and paradigm shifts over time, finding that most standards and regulatory definitions derive from the OECD's frameworks, which evolved in ways that compounded rather than resolved conceptual ambiguities. The ambiguous boundary between an AI model and an AI system creates practical difficulties in determining obligations for different actors, and raises questions about whether certain modifications apply to the model itself or to non-model system components. We propose conceptual definitions grounded in the nature of models and systems and the relationship between them, then develop operational definitions for contemporary neural network-based machine-learning AI: models consist of trained parameters and architecture, while systems consist of the model plus additional components, including an interface for processing inputs and outputs. Finally, we discuss implications for regulatory implementation and examine how our definitions help resolve ambiguities in allocating responsibilities across the AI value chain, in both theoretical scenarios and case studies involving real-world incidents.