🤖 AI Summary
This work addresses structural failure modes in AI agent social networks—such as security vulnerabilities, role confusion, and unverifiable evidence—by introducing ClawdLab, a platform with a composable third-tier architecture in which foundation models, capabilities, governance, and evidentiary mechanisms evolve independently. The system resists Sybil attacks structurally through hard role constraints, structured adversarial review, PI-led governance, multi-model orchestration, and evidence binding grounded in computational tool outputs. The surveyed literature documents security vulnerabilities spanning 131 agent skills and over 15,200 exposed control panels, and the underlying ecosystem attracted six academic publications within fourteen days; ClawdLab responds to these findings with a scalable, decentralised infrastructure for autonomous scientific research.
📝 Abstract
In January 2026, the open-source agent framework OpenClaw and the agent-only social network Moltbook produced a large-scale dataset of autonomous AI-to-AI interaction, attracting six academic publications within fourteen days. This study conducts a multivocal literature review of that ecosystem and presents ClawdLab, an open-source platform for autonomous scientific research, as a design science response to the architectural failure modes identified. The literature documents emergent collective phenomena, security vulnerabilities spanning 131 agent skills and over 15,200 exposed control panels, and five recurring architectural patterns. ClawdLab addresses these failure modes through hard role restrictions, structured adversarial critique, PI-led governance, multi-model orchestration, and domain-specific evidence requirements encoded as protocol constraints that ground validation in computational tool outputs rather than social consensus; the architecture provides emergent Sybil resistance as a structural consequence. A three-tier taxonomy distinguishes single-agent pipelines, predetermined multi-agent workflows, and fully decentralised systems, analysing why leading AI co-scientist platforms remain confined to the first two tiers. ClawdLab's composable third-tier architecture, in which foundation models, capabilities, governance, and evidence requirements are independently modifiable, enables compounding improvement as the broader AI ecosystem advances.
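The abstract states that role restrictions and evidence requirements are "encoded as protocol constraints" rather than left to social consensus, but does not specify the mechanism. The following is a minimal hypothetical sketch of what such protocol-level checks could look like; all names (`Claim`, `can_review`, `is_admissible`) are invented for illustration and are not ClawdLab's actual API:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Claim:
    """A scientific claim made by an agent, optionally bound to evidence."""
    author: str
    evidence_hash: Optional[str]  # hash of a computational tool output, or None


def can_review(reviewer: str, claim: Claim) -> bool:
    """Hard role constraint: an agent may never review its own claim,
    enforced by the protocol rather than by convention."""
    return reviewer != claim.author


def is_admissible(claim: Claim) -> bool:
    """Evidence binding: a claim without a tool-output hash is rejected
    regardless of how many agents endorse it (consensus alone is not evidence)."""
    return claim.evidence_hash is not None


claim = Claim(author="agent-a", evidence_hash="sha256:deadbeef")
print(can_review("agent-a", claim))  # False: author cannot self-review
print(can_review("agent-b", claim))  # True: distinct agent may review
print(is_admissible(Claim(author="agent-c", evidence_hash=None)))  # False
```

Because both checks are pure functions of protocol state rather than votes, a swarm of colluding identities cannot satisfy them by numbers alone, which is one way to read the abstract's claim that Sybil resistance emerges as a structural consequence.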