🤖 AI Summary
This study addresses the growing tension between Artificial Intelligence Safety (AIS) and Artificial Intelligence Ethics (AIE), which impedes collaborative governance of responsible AI. It presents the first systematic modeling of their interaction patterns and proposes a “critical bridging” approach centered on integration strategies anchored in shared concerns. Leveraging computational text analysis of 3,550 scholarly papers combined with expert annotation, the research constructs a reproducible knowledge map that empirically reveals AIE’s emphasis on real-world injustices and concrete harms, while AIS prioritizes proactive risk mitigation. Crucially, the analysis identifies significant convergence between the two fields in transparency, reproducibility, and governance mechanisms, thereby offering an evidence-based foundation and an innovative framework for cross-domain collaboration.
📝 Abstract
Tensions between AI Safety (AIS) and AI Ethics (AIE) have increasingly surfaced in AI governance and public debates about AI, leading to what we term the “responsible AI divides”. We introduce a model that categorizes four modes of engagement with these tensions: radical confrontation, disengagement, compartmentalized coexistence, and critical bridging. We then investigate how critical bridging, with a particular focus on bridging problems, offers one of the most viable constructive paths for advancing responsible AI. Using computational tools to analyze a curated dataset of 3,550 papers, we map the research landscapes of AIE and AIS to identify both distinct and overlapping problems. Our findings point to both thematic divides and overlaps. For example, we find that AIE has long grappled with overcoming injustice and tangible AI harms, whereas AIS has primarily embodied an anticipatory approach focused on mitigating risks from AI capabilities. At the same time, we find significant overlap in core research concerns across both AIE and AIS around transparency, reproducibility, and inadequate governance mechanisms. As AIE and AIS continue to evolve, we recommend focusing on bridging problems as a constructive path forward for enhancing collaborative AI governance. We offer a series of recommendations to integrate shared considerations into a collaborative approach to responsible AI. Alongside our proposal, we highlight its limitations and explore open problems for future research. All data, including the fully annotated dataset of papers and the code to reproduce our figures, can be found at: https://github.com/gyevnarb/ai-safety-ethics.