Beyond Release: Access Considerations for Generative AI Systems

📅 2025-02-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses how generative AI systems can remain practically inaccessible to end users and stakeholders even after release. It proposes a three-dimensional accessibility assessment framework—structured along the axes of *resourcing*, *technical usability*, and *utility*—to move beyond binary release decisions. The framework decomposes "accessibility" into quantifiable, comparable variables per system component, identifying infrastructure dependencies, deployment capability bottlenecks, and sociotechnical adoption barriers. Through a comparative analysis of four high-performance language models (two open-weight, two closed-weight), risk–benefit trade-off modeling, and intervention feasibility evaluation, the paper shows that open- and closed-weight models face similar constraints along all three dimensions. The findings provide a multidimensional accessibility baseline for AI governance and shift system design priorities from *whether to release* to *how to ensure equitable, actionable access*.

📝 Abstract
Generative AI release decisions determine whether system components are made available, but release alone does not address many other elements that shape how users and stakeholders can engage with a system. Beyond release, access to system components informs potential risks and benefits. Access refers to the practical needs, infrastructural, technical, and societal, required to use available components in some way. We deconstruct access along three axes: resourcing, technical usability, and utility. Within each axis, a set of variables per system component clarifies tradeoffs. For example, resourcing requires access to computing infrastructure to serve model weights. We also compare the accessibility of four high-performance language models, two open-weight and two closed-weight, showing that similar considerations apply to all four when framed in terms of access variables. Access variables set the foundation for scaling or increasing access to users; we examine the scale of access and how scale affects the ability to manage and intervene on risks. This framework better captures the landscape and risk-benefit tradeoffs of system releases, informing release decisions, research, and policy.
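The abstract's decomposition of access into per-component variables along three axes could be sketched as a simple scorecard. The sketch below is purely illustrative: the axis names come from the abstract, but the scoring scale, field choices, and example model names are invented assumptions, not the paper's actual methodology.

```python
from dataclasses import dataclass

@dataclass
class AccessProfile:
    """Hypothetical scorecard for one system along the paper's three axes.

    Scores use an invented 0-3 scale (0 = inaccessible, 3 = broadly
    accessible); the paper itself defines qualitative variables, not
    this numeric scheme.
    """
    model: str
    open_weight: bool
    resourcing: int           # e.g., compute/infrastructure needed to serve weights
    technical_usability: int  # e.g., deployment capability required of the user
    utility: int              # e.g., practical usefulness of accessible components

    def total(self) -> int:
        """Aggregate accessibility score across the three axes."""
        return self.resourcing + self.technical_usability + self.utility

# Hypothetical comparison mirroring the paper's open- vs closed-weight analysis.
profiles = [
    AccessProfile("open-model-A", open_weight=True,
                  resourcing=1, technical_usability=1, utility=2),
    AccessProfile("closed-model-B", open_weight=False,
                  resourcing=2, technical_usability=2, utility=2),
]

for p in sorted(profiles, key=AccessProfile.total, reverse=True):
    print(f"{p.model}: total access score {p.total()}")
```

A structure like this makes the abstract's point concrete: an open-weight release can still score low on resourcing (serving weights requires infrastructure), so openness alone does not determine accessibility.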
Problem

Research questions and friction points this paper is trying to address.

Which access variables shape engagement with generative AI systems beyond release?
How does the accessibility of open-weight models compare with that of closed-weight models?
How does the scale of access affect the ability to manage and intervene on risks?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deconstructs access along three axes: resourcing, technical usability, and utility.
Compares the accessibility of four high-performance language models, two open-weight and two closed-weight.
Examines how the scale of access interacts with risk management and intervention.