🤖 AI Summary
This study investigates the public acceptability of AI applications across ten occupational and personal scenarios, along with the underlying cognitive mechanisms. Using a mixed-methods survey of 197 demographically diverse participants (structured questionnaires, descriptive and cross-tabular analyses, qualitative rationale coding, and association modeling), it examines the three-way interplay among scenario type (occupational vs. personal), user characteristics (gender, employment status, education level, AI literacy), and reasoning modes (cost-benefit vs. rule-based). Results show significantly lower AI acceptability in occupational contexts than in personal ones; rule-based reasoning is more strongly associated with "unacceptable" judgments; and gender, employment status, and AI literacy serve as critical moderating variables. The findings provide empirically grounded insights for AI governance and inform fine-grained, context-sensitive design strategies.
📝 Abstract
In recent years, there has been growing recognition of the need to incorporate laypeople's input into the governance and acceptability assessment of AI usage. However, how and why people judge different AI use cases to be acceptable or unacceptable remains under-explored. In this work, we investigate the attitudes and reasons that influence people's judgments about AI's development via a survey administered to demographically diverse participants (N=197). We focus on ten distinct professional (e.g., Lawyer AI) and personal (e.g., Digital Medical Advice AI) AI use cases to understand how characteristics of the use cases and participants' demographics affect acceptability. We explore the relationships between participants' judgments and their rationales, such as their reasoning approaches (cost-benefit vs. rule-based reasoning). Our empirical findings reveal a number of factors that influence acceptance: a generally more negative reception of, and greater disagreement about, professional use cases than personal ones; significant effects of demographic factors such as gender, employment, and education, as well as AI literacy level; and reasoning patterns such as rule-based reasoning being invoked more often when a use case is judged unacceptable. Based on these findings, we discuss the key implications of soliciting acceptability judgments and reasoning about AI use cases to collaboratively build consensus. Finally, we shed light on how future FAccT researchers and practitioners can better incorporate diverse perspectives from laypeople to develop AI that aligns with public expectations and needs.