🤖 AI Summary
This study examines how the current growth-oriented economic paradigm drives artificial intelligence development in ways that may exacerbate social inequality, ecological degradation, and existential risk, tracing these harms to a systemic misalignment between AI objectives and human wellbeing and ecological sustainability. It extends the AI alignment problem into the economic dimension by drawing on post-growth economics. The work proposes replacing unbounded optimization with satisficing strategies, anchoring AI development within socio-ecological boundaries such as those defined by the Doughnut model, and capping resource use to curb systemic rebound effects. It further advocates governing AI as a public good, prioritizing tool-like systems that enhance human autonomy over agentic AI. Together, these proposals offer a policy pathway toward autonomy-enhancing AI and a new economic foundation for the sustainable development of artificial general intelligence.
📝 Abstract
Artificial intelligence (AI) is advancing exponentially and is likely to have profound impacts on human wellbeing, social equity, and environmental sustainability. Here we argue that the "alignment problem" in AI research is also an economic alignment problem, as developing advanced AI inside a growth-based system is likely to increase social, environmental, and existential risks. We show that post-growth research offers concepts and policies that could substantially reduce AI risks, such as by replacing optimisation with satisficing, using the Doughnut of social and planetary boundaries to guide development, and curbing systemic rebound with resource caps. We propose governance and business reforms that treat AI as a commons and prioritise tool-like autonomy-enhancing systems over agentic AI. Finally, we argue that the development of artificial general intelligence (AGI) may require a new economics, for which post-growth scholarship provides a strong foundation.