Understanding AI Trustworthiness: A Scoping Review of AIES & FAccT Articles

📅 2025-10-24
📈 Citations: 0 · Influential citations: 0
🤖 AI Summary
Current AI trustworthiness research overemphasizes technical attributes such as robustness and fairness while neglecting sociotechnical dimensions, producing a misalignment between how trustworthiness is conceptualized and how it plays out in practice. This study conducts a systematic scoping review of papers published at the AIES and FAccT conferences (2018–2023), analyzing the conceptual models, measurement instruments, validation methodologies, and application contexts of trustworthiness. The results reveal a prevailing technocentric paradigm: social, cultural, and institutional factors are largely absent, and power dynamics and value conflicts remain inadequately modeled. In response, we propose an interdisciplinary analytical framework that integrates sociotechnical perspectives, foregrounding the situatedness, dynamism, and negotiability of trust. We advocate incorporating institutional design, participatory validation, and pluralistic value alignment into the AI trustworthiness research agenda, thereby advancing both technical rigor and social legitimacy in trustworthy AI development.

📝 Abstract
Background: Trustworthy AI serves as a foundational pillar for two major AI ethics conferences, AIES and FAccT. However, current research often adopts technocentric approaches, focusing primarily on technical attributes such as reliability, robustness, and fairness while overlooking the sociotechnical dimensions critical to understanding AI trustworthiness in real-world contexts.

Objectives: This scoping review aims to examine how the AIES and FAccT communities conceptualize, measure, and validate AI trustworthiness, identifying major gaps and opportunities for advancing a holistic understanding of trustworthy AI systems.

Methods: We conduct a scoping review of AIES and FAccT conference proceedings to date, systematically analyzing how trustworthiness is defined, operationalized, and applied across different research domains. Our analysis focuses on conceptualization approaches, measurement methods, verification and validation techniques, application areas, and underlying values.

Results: While significant progress has been made in defining technical attributes such as transparency, accountability, and robustness, our findings reveal critical gaps. Current research predominantly emphasizes technical precision at the expense of social and ethical considerations. The sociotechnical nature of AI systems remains underexplored, and trustworthiness emerges as a contested concept shaped by those with the power to define it.

Conclusions: An interdisciplinary approach combining technical rigor with social, cultural, and institutional considerations is essential for advancing trustworthy AI. We propose actionable measures for the AI ethics community to adopt holistic frameworks that genuinely address the complex interplay between AI systems and society, ultimately promoting responsible technological development that benefits all stakeholders.
Problem

Research questions and friction points this paper is trying to address.

Examines how the AIES and FAccT communities conceptualize AI trustworthiness
Identifies gaps in sociotechnical dimensions of trustworthy AI systems
Proposes holistic frameworks combining technical and social considerations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conducting a scoping review of AIES and FAccT proceedings
Analyzing the conceptualization and measurement of AI trustworthiness
Proposing interdisciplinary sociotechnical frameworks for trustworthy AI