Bridging the Communication Gap: Evaluating AI Labeling Practices for Trustworthy AI Development

📅 2025-01-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of low AI transparency, which impedes comprehension and erodes trust among non-expert users. Through 28 semi-structured interviews and thematic analysis, it systematically evaluates the effectiveness of existing AI labels—assessing their intelligibility, credibility, and practical utility—in bridging communication gaps among developers, end users, and stakeholders. The work identifies a fundamental trade-off between information simplification and technical fidelity in label design, and proposes customizable, interactive labeling frameworks as a way to resolve it for diverse user needs. Interviewees rated labels as less complex and more comprehensible than other reporting formats and saw benefits for non-expert decision-makers; trust depended most on usability and the credibility of the labeling authority, with mixed, role-based preferences for self-certification versus third-party certification. Transparent reporting of resource efficiency also nudged practitioner priorities toward sustainability. Collectively, these findings provide evidence-based, actionable guidance for refining and standardizing AI labels and supporting their real-world adoption.

📝 Abstract
As artificial intelligence (AI) becomes integral to economy and society, communication gaps between developers, users, and stakeholders hinder trust and informed decision-making. High-level AI labels, inspired by frameworks like EU energy labels, have been proposed to make the properties of AI models more transparent. Without requiring deep technical expertise, they can inform users about the trade-off between predictive performance and resource efficiency. However, the practical benefits and limitations of AI labeling remain underexplored. This study evaluates AI labeling through qualitative interviews along four key research questions. Based on thematic analysis and inductive coding, we found a broad range of practitioners to be interested in AI labeling (RQ1). They see benefits for alleviating communication gaps and aiding non-expert decision-makers; however, limitations, misunderstandings, and suggestions for improvement were also discussed (RQ2). Compared to other reporting formats, interviewees positively evaluated the reduced complexity of labels, which increases overall comprehensibility (RQ3). Trust was influenced most by usability and the credibility of the responsible labeling authority, with mixed preferences for self-certification versus third-party certification (RQ4). Our insights highlight that AI labels pose a trade-off between simplicity and complexity, which could be resolved by developing customizable and interactive labeling frameworks to address diverse user needs. Transparent labeling of resource efficiency also nudged interviewee priorities towards paying more attention to sustainability aspects during AI development. This study validates AI labels as a valuable tool for enhancing trust and communication in AI, offering actionable guidelines for their refinement and standardization.
Problem

Research questions and friction points this paper is trying to address.

AI Interpretability
User Trust
Labeling Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI Labeling
Transparency Enhancement
Standardization Recommendations