🤖 AI Summary
Current AI systems predominantly rely on a single reasoning paradigm, and alignment mechanisms such as reinforcement learning from human feedback (RLHF) remain confined to the individual level, limiting their capacity to support complex tasks and large-scale human-AI collaboration. This work proposes that intelligence is inherently pluralistic, social, and relational, and introduces an internal "society of mind" that enables cognitive deliberation through debate. It further integrates humans and AI into hybrid "centaur" agents and designs digital protocols, inspired by organizations and markets, to elevate alignment from the individual to the institutional level. Experimental results demonstrate that this approach substantially enhances performance on complex problem-solving tasks and provides a scalable, trustworthy socio-technical infrastructure for collective intelligence.
📝 Abstract
The "AI singularity" is often miscast as a monolithic, godlike mind. Evolution suggests a different path: intelligence is fundamentally plural, social, and relational. Recent advances in agentic AI reveal that frontier reasoning models, such as DeepSeek-R1, do not improve simply by "thinking longer." Instead, they simulate internal "societies of thought": spontaneous cognitive debates that argue, verify, and reconcile to solve complex tasks. Moreover, we are entering an era of human-AI centaurs: hybrid actors whose collective agency transcends individual control. Scaling this intelligence requires shifting from dyadic alignment (RLHF) toward institutional alignment. By designing digital protocols modeled on organizations and markets, we can build a social infrastructure of checks and balances. The next intelligence explosion will not be a single silicon brain, but a complex, combinatorial society, specializing and sprawling like a city. No mind is an island.