Blueprints of Trust: AI System Cards for End to End Transparency and Governance

📅 2025-09-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address insufficient transparency, ambiguous accountability, and fragmented risk traceability across AI system development and deployment lifecycles, this paper proposes the Hazard-Aware System Card (HASC) framework. HASC introduces the AI Safety Hazard (ASH) ID, a standardized risk identifier that complements existing security identifiers such as CVEs, and integrates concepts from model cards and system cards into a structured, lifecycle-spanning metadata schema aligned with international standards such as ISO/IEC 42001:2023. By combining dynamic safety logging with standardized hazard identification, HASC enables unified risk representation, end-to-end traceability, and explicit assignment of responsibilities across design, assessment, and deployment phases. The paper also compares the proposed system cards with ISO/IEC 42001:2023 and discusses how the two can complement each other, positioning HASC as a scalable, verifiable, and standards-compliant foundation for AI governance and responsible AI assurance.

📝 Abstract
This paper introduces the Hazard-Aware System Card (HASC), a novel framework designed to enhance transparency and accountability in the development and deployment of AI systems. The HASC builds upon existing model card and system card concepts by integrating a comprehensive, dynamic record of an AI system's security and safety posture. The framework proposes a standardized system of identifiers, including a novel AI Safety Hazard (ASH) ID, to complement existing security identifiers like CVEs, allowing for clear and consistent communication of fixed flaws. By providing a single, accessible source of truth, the HASC empowers developers and stakeholders to make more informed decisions about AI system safety throughout its lifecycle. Ultimately, we also compare our proposed AI system cards with the ISO/IEC 42001:2023 standard and discuss how they can be used to complement each other, providing greater transparency and accountability for AI systems.
Problem

Research questions and friction points this paper is trying to address.

Enhancing AI system transparency and accountability through a novel framework
Integrating dynamic security and safety records into AI system documentation
Standardizing hazard identifiers to complement existing security vulnerability tracking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel Hazard-Aware System Card framework for AI transparency
Integrates dynamic safety and security posture records
Proposes standardized identifiers including AI Safety Hazard ID
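The paper does not publish a concrete schema, but the idea of a dynamic system card that logs standardized hazard identifiers alongside existing CVE references can be sketched as follows. The `ASH-YYYY-NNNN` identifier format, the field names, and the lifecycle-phase values are illustrative assumptions, not the paper's specification:

```python
from dataclasses import dataclass, field
import re

# Assumed identifier syntax, modeled loosely on the CVE naming scheme;
# the paper proposes ASH IDs but does not fix a concrete format.
ASH_ID_PATTERN = re.compile(r"^ASH-\d{4}-\d{4,}$")

@dataclass
class HazardEntry:
    """One safety-hazard record in a Hazard-Aware System Card (sketch)."""
    ash_id: str                 # e.g. "ASH-2025-0001" (hypothetical format)
    description: str
    lifecycle_phase: str        # e.g. "design", "assessment", "deployment"
    status: str = "open"        # "open" or "fixed"
    related_cves: list = field(default_factory=list)  # links to security IDs

    def __post_init__(self):
        if not ASH_ID_PATTERN.match(self.ash_id):
            raise ValueError(f"malformed ASH ID: {self.ash_id}")

@dataclass
class SystemCard:
    """Minimal dynamic system card accumulating hazard records over time."""
    system_name: str
    hazards: list = field(default_factory=list)

    def log_hazard(self, entry: HazardEntry) -> None:
        self.hazards.append(entry)

    def open_hazards(self) -> list:
        # The "single source of truth" view: which hazards remain unresolved.
        return [h for h in self.hazards if h.status == "open"]
```

The design choice mirrored here is the one the abstract emphasizes: the card is append-only and queryable, so stakeholders can inspect both fixed and outstanding hazards at any lifecycle stage rather than reading a static, point-in-time document.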
Authors: Huzaifa Sidhpurwala, Emily Fox, Garth Mollett, Florencio Cano Gabarda, Roman Zhukov