🤖 AI Summary
This study addresses socio-technical barriers to deploying trustworthy AI in safety-critical domains such as air traffic control, where current AI system designs overlook the implicit, practice-based trust formation mechanisms embedded in frontline practitioners’ everyday tool interactions. Method: Employing ethnographic fieldwork, including sustained observation and situated practice analysis, the study examines trust dynamics between air traffic controllers and their existing operational tools, identifying key determinants of AI trustworthiness: contextual adaptability, explainability, controllability, and organizational support. Contribution/Results: Moving beyond abstract principles, the research reconceptualizes “trust” as an actionable, workflow-embedded design dimension. It proposes a practice-informed framework for trustworthy AI design that integrates empirical insights from real-world operations, offering concrete guidance for system development and human-AI collaboration in high-risk industries.
📝 Abstract
The socio-technical challenges confronting the adoption of AI in organisational settings have so far been largely absent from the related literature. In particular, research into requirements for trustworthy AI typically overlooks how people deal with problems of trust in the tools they use as part of their everyday work practices. This article presents findings from an ongoing ethnographic study of how current tools are used in air traffic control work, and what this reveals about requirements for trustworthy AI in air traffic control and other safety-critical application domains.