The User-first Approach to AI Ethics: Preferences for Ethical Principles in AI Systems across Cultures and Contexts

📅 2025-08-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Efforts to operationalize AI ethics principles have largely overlooked users’ actual preferences, and cross-cultural empirical validation in particular has been lacking. Method: We conducted a discrete choice experiment across China, Germany, Japan, and the United States (N = 4,216), quantifying user prioritization of 11 AI ethics principles and identifying four distinct user segments via latent class analysis—most notably a “regulation-dependent” majority (38%) exhibiting low subjective concern for ethical issues. Results: Privacy, fairness, and transparency emerged as globally high-priority principles; cultural background and application context significantly moderated ethical weightings. This study introduces the first empirically grounded, cross-cultural, and context-sensitive AI ethics adaptation framework, enabling user-centered ethical design, differentiated regulatory policies, and future empirical research.

📝 Abstract
As AI systems increasingly permeate everyday life, designers and developers face mounting pressure to balance innovation with ethical design choices. To date, the operationalisation of AI ethics has predominantly depended on frameworks that prescribe which ethical principles should be embedded within AI systems. However, the extent to which users value these principles remains largely unexplored in the existing literature. In a discrete choice experiment conducted in four countries, we quantify user preferences for 11 ethical principles. Our findings indicate that, while users generally prioritise privacy, justice & fairness, and transparency, their preferences exhibit significant variation based on culture and application context. Latent class analysis further revealed four distinct user cohorts, the largest of which is ethically disengaged and defers to regulatory oversight. Our findings offer (1) empirical evidence of uneven user prioritisation of AI ethics principles, (2) actionable guidance for operationalising ethics tailored to culture and context, (3) support for the development of robust regulatory mechanisms, and (4) a foundation for advancing a user-centred approach to AI ethics, motivated independently from abstract moral theory.
Problem

Research questions and friction points this paper is trying to address.

Assessing user preferences for ethical principles in AI systems
Exploring cultural and contextual variations in AI ethics priorities
Identifying gaps between prescribed frameworks and user values
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantify user preferences for ethical principles
Tailor ethics to culture and context
Identify ethically disengaged user cohorts
Benjamin J. Carroll
University of Technology Sydney, Australia
Jianlong Zhou
University of Technology Sydney (UTS)
AI Ethics · AI Fairness · AI Explainability · Human Centred AI · Human Computer Interaction
Paul F. Burke
University of Technology Sydney, Australia
Sabine Ammon
Technische Universität Berlin