🤖 AI Summary
This study investigates how policy design—specifically policy length and stated purpose—affects the quality of machine learning explanations, addressing a critical gap in AI governance where policy formulation often diverges from practical implementation. Method: a controlled classroom experiment with 124 developers, combining behavioral compliance analysis with a mixed-methods evaluation of explanation outputs. Contribution/Results: policy length significantly influences developer engagement and partial compliance rates, whereas articulating policy purpose yields no statistically significant improvement in compliance or explanation quality. Explanations consistently exhibit low transparency and lack multi-stakeholder perspectives. The study uncovers a fundamental misalignment between policy form and functional outcomes in AI governance, and proposes an evidence-based optimization pathway: concise policies coupled with multi-dimensional quality assessment. These findings provide empirical support and actionable design insights for developing implementable, explanation-oriented governance frameworks.
📝 Abstract
Machine learning models are often criticized as opaque due to a lack of transparency in their decision-making processes. This study examines how policy design affects the quality of explanations for ML models. We conducted a classroom experiment with 124 participants and analyzed the effects of policy length and stated purpose on developer compliance with policy requirements. Our results indicate that while policy length affects engagement with some requirements, policy purpose has no statistically significant effect, and explanation quality is generally poor. These findings highlight the challenge of designing effective policy and the importance of addressing diverse stakeholder perspectives within explanations.