When Openness Fails: Lessons from System Safety for Assessing Openness in AI

📅 2025-10-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing AI openness assessments predominantly focus on the availability of data, models, and code, yet neglect whether such openness meaningfully advances democratization, autonomy, and other intended socio-technical outcomes. The paper traces this gap to a disconnect from real-world release contexts and the absence of systematic analysis of reuse stakeholders' identities, purposes, and constraints. Methodologically, it adapts five core lessons from system safety (context-awareness, failure modes, layered defenses, human factors, and emergent properties) and integrates them into openness evaluation. The resulting socio-technical framework exposes fundamental limitations of conventional metrics with respect to resilience and fairness, and proposes a paradigm centered on ecosystem integrity, risk-informed contextualization, and outcome-oriented efficacy, offering both theoretical grounding and actionable guidance for designing substantively open AI systems.

📝 Abstract
Most frameworks for assessing the openness of AI systems use narrow criteria such as availability of data, model, code, documentation, and licensing terms. However, to evaluate whether the intended effects of openness (such as democratization and autonomy) are realized, we need a more holistic approach that considers the context of release: who will reuse the system, for what purposes, and under what conditions. To this end, we adapt five lessons from system safety that offer guidance on how openness can be evaluated at the system level.
Problem

Research questions and friction points this paper is trying to address.

Current AI openness frameworks use narrow criteria like data availability
Need holistic approach considering context of system release
Adapt system safety lessons to evaluate openness at system level
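The contrast the paper draws can be sketched in code: a minimal, hypothetical illustration (all names are assumptions, not from the paper) of a "narrow" assessment that only counts released artifacts, next to the release-context fields the abstract says a holistic evaluation must also consider.

```python
from dataclasses import dataclass

# Hypothetical set of artifacts that narrow openness criteria check for.
ARTIFACTS = ("data", "model_weights", "code", "documentation", "license")

def narrow_openness_score(released: set[str]) -> float:
    """Fraction of standard artifacts made available.

    This is roughly what current frameworks measure: presence of
    artifacts, with no view of who can actually reuse them.
    """
    return sum(a in released for a in ARTIFACTS) / len(ARTIFACTS)

@dataclass
class ReleaseContext:
    """Context a holistic, system-level assessment would also weigh."""
    reusers: list[str]      # who will reuse the system
    purposes: list[str]     # for what purposes
    constraints: list[str]  # under what conditions (compute, legal, skills)

# A release can score well on artifact availability while the context
# still blocks the intended effects of openness (e.g. reusers lacking
# the compute or expertise to exercise the released artifacts).
score = narrow_openness_score({"model_weights", "code", "license"})
```

The point of the sketch is only that `narrow_openness_score` alone is the status quo; the paper's contribution is evaluating the pair (artifacts, context) rather than the artifact checklist in isolation.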
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapting system safety principles for AI openness evaluation
Proposing holistic contextual analysis beyond technical artifacts
Focusing on system-level impacts of AI release conditions