🤖 AI Summary
This study addresses the responsible deployment of conversational AI for social good (CAI4SG), examining core challenges such as algorithmic bias, data privacy, and socio-technical risks. It proposes a role-centered analytical framework that classifies conversational agents by their degree of autonomy and affective engagement, drawing on interdisciplinary perspectives from human-computer interaction, AI ethics, and social computing. The framework is applied across critical domains, including mental health support and accessibility assistance. Through a systematic review, the research identifies the distinct ethical and technical issues associated with each agent role, offering theoretical guidance for the design and implementation of CAI4SG systems. The findings aim to advance conversational AI that is more equitable, effective, and trustworthy in socially beneficial applications.
📝 Abstract
The integration of Conversational Agents (CAs) into daily life offers opportunities to tackle global challenges, leading to the emergence of Conversational AI for Social Good (CAI4SG). This paper examines advances in CAI4SG through a role-based framework that categorizes systems according to their AI autonomy and emotional engagement. The framework emphasizes the importance of considering the role a CA plays in social good contexts, such as serving as an empathetic supporter in mental health or as an assistant for accessibility. Deploying CAs in these varied roles also raises distinct challenges, including algorithmic bias, data privacy, and potential socio-technical harms, and these issues can differ with the CA's role and level of engagement. This paper provides an overview of the current landscape, offering a role-based understanding to guide future research and design toward the equitable, ethical, and effective development of CAI4SG.