🤖 AI Summary
This study identifies systemic documentation uncertainties faced by generative AI developers on open-source platforms (e.g., Hugging Face), including ambiguity in content selection, inconsistent presentation of critical model components, and unclear attribution of accountability. Through semi-structured interviews with 13 developers and grounded-theory-driven thematic coding, we empirically derive and validate a novel "Three-Dimensional Model of Documentation Uncertainty," the first such framework in the GenAI documentation literature. Our contribution includes a collaborative governance framework comprising three actionable recommendations: (1) cultivating community-endorsed documentation standards; (2) establishing shared model evaluation infrastructure; and (3) formalizing multi-stakeholder responsibility-delineation mechanisms. The framework has been adopted by the Hugging Face Documentation Working Group. It advances both theoretical understanding and practical implementation of responsible, standardized documentation for generative AI models.
📝 Abstract
Model documentation plays a crucial role in promoting transparency and responsible development of AI systems. With the rise of Generative AI (GenAI), open-source platforms have increasingly become hubs for hosting and distributing these models, prompting platforms like Hugging Face to develop dedicated model documentation guidelines that align with responsible AI principles. Despite these growing efforts, there remains a lack of understanding of how developers document their GenAI models on open-source platforms. Through interviews with 13 GenAI developers active on open-source platforms, we provide empirical insights into their documentation practices and challenges. Our analysis reveals that, despite existing resources, developers of GenAI models still face multiple layers of uncertainty in their model documentation: (1) uncertainty about what specific content should be included; (2) uncertainty about how to effectively report key components of their models; and (3) uncertainty in deciding who should take responsibility for various aspects of model documentation. Based on our findings, we discuss the implications for policymakers, open-source platforms, and the research community to support meaningful, effective, and actionable model documentation in the GenAI era, including cultivating better community norms, building robust evaluation infrastructures, and clarifying roles and responsibilities.