🤖 AI Summary
Existing AI openness assessments predominantly focus on the availability of data, models, and code, yet neglect whether such openness meaningfully advances democratization, autonomy, and other intended socio-technical outcomes. This paper attributes the gap to a disconnection from real-world release contexts and a lack of systematic analysis of reuse stakeholders' identities, purposes, and constraints. Methodologically, it adapts five core lessons from system safety, emphasizing context-awareness, failure modes, layered defenses, human factors, and emergent properties, and integrates them into openness evaluation. The resulting socio-technical framework exposes fundamental limitations of conventional metrics with respect to resilience and fairness. It proposes a paradigm centered on ecosystem integrity, risk-informed contextualization, and outcome-oriented efficacy, providing both theoretical grounding and actionable guidance for designing substantively open AI systems.
📝 Abstract
Most frameworks for assessing the openness of AI systems use narrow criteria such as availability of data, model, code, documentation, and licensing terms. However, to evaluate whether the intended effects of openness, such as democratization and autonomy, are realized, we need a more holistic approach that considers the context of release: who will reuse the system, for what purposes, and under what conditions. To this end, we adapt five lessons from system safety that offer guidance on how openness can be evaluated at the system level.