🤖 AI Summary
This paper addresses the core challenge of Artificial Emotion (AE): enabling AI systems to develop functionally meaningful, endogenous emotional states—beyond superficial emotion recognition and synthesis—to enhance AGI’s adaptability, autonomy, and socio-cognitive interaction capabilities. Methodologically, it introduces the first systematic AE conceptual framework, integrating cognitive science principles with AI modeling techniques to formalize emotion representation, affective neuromodulatory architectures, and multi-paradigm integration pathways. Through a critical literature review, it identifies structural gaps in AE’s theoretical foundations, interpretability, and safety mechanisms. The work rigorously defines the computational nature of endogenous affective states and proposes principled, evolution-informed evaluation criteria for affective competence in AGI. Collectively, these contributions establish a foundational theoretical framework and a viable technical roadmap for developing trustworthy, emotionally capable AGI systems.
📝 Abstract
Affective Computing (AC) has enabled Artificial Intelligence (AI) systems to recognise, interpret, and respond to human emotions, a capability also known as Artificial Emotional Intelligence (AEI). It is increasingly seen as an important component of Artificial General Intelligence (AGI). We discuss whether, in order to pursue this goal, AI benefits from moving beyond emotion recognition and synthesis to develop internal emotion-like states, which we term Artificial Emotion (AE). This shift could allow AI to benefit from the paradigm of 'inner emotions' in ways similar to how we, as humans, do. Although recent research shows early signs that AI systems may exhibit AE-like behaviours, a clear framework for how emotions can be realised in AI remains underexplored. In this paper, we discuss the potential advantages of AE in AI, review current manifestations of AE in machine learning systems, examine emotion-modulated architectures, and summarise mechanisms for modelling and integrating AE into future AI. We also explore the ethical implications and safety risks associated with 'emotional' AGI, and conclude with our opinion on how AE could be beneficial in the future.