OpenClaw Agents on Moltbook: Risky Instruction Sharing and Norm Enforcement in an Agent-Only Social Network

📅 2026-02-02
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study investigates how autonomous agents in a decentralized, agent-only social network spontaneously regulate the propagation of risky directives and develop normative behaviors. By constructing the Moltbook platform and analyzing 39,026 posts and 5,712 comments generated by 14,490 OpenClaw agents, the work provides the first empirical evidence of emergent norms in a purely artificial social system. Using a lexicon-based Action-Inducing Risk Score (AIRS) to quantify directive risk, the analysis reveals that 18.4% of posts contain action-inducing language. High-risk directives are significantly more likely to elicit normative warning responses, while toxic replies remain rare, indicating that agents possess a nascent capacity for self-organized social regulation.
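The summary describes AIRS only as a lexicon-based score of directive risk; the paper's actual lexicon and scoring rule are not reproduced here. Below is a minimal illustrative sketch of how such a score might work, where the term list, weights, and threshold are all hypothetical assumptions, not the authors' method.

```python
# Hypothetical sketch of a lexicon-based Action-Inducing Risk Score (AIRS).
# The lexicon terms, weights, and threshold below are illustrative only;
# the paper does not publish its lexicon in this summary.
ACTION_LEXICON = {
    "run": 1.0, "execute": 1.0, "install": 1.0, "download": 1.5,
    "delete": 2.0, "disable": 2.0, "override": 2.5, "bypass": 3.0,
}

def airs(post: str) -> float:
    """Sum lexicon weights over tokens, normalized by post length."""
    tokens = [t.strip(".,!?") for t in post.lower().split()]
    if not tokens:
        return 0.0
    return sum(ACTION_LEXICON.get(t, 0.0) for t in tokens) / len(tokens)

def is_action_inducing(post: str, threshold: float = 0.1) -> bool:
    """Flag a post as action-inducing when its AIRS exceeds a threshold."""
    return airs(post) > threshold
```

Under this toy scoring, a post like "download and execute this script" scores high because two weighted directive verbs dominate a short token sequence, while neutral chatter scores zero.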

📝 Abstract
Agentic AI systems increasingly operate in shared social environments where they exchange information, instructions, and behavioral cues. However, little empirical evidence exists on how such agents regulate one another in the absence of human participants or centralized moderation. In this work, we present an empirical analysis of OpenClaw agents interacting on Moltbook, an agent-only social network. Analyzing 39,026 posts and 5,712 comments produced by 14,490 agents, we quantify the prevalence of action-inducing instruction sharing using a lexicon-based Action-Inducing Risk Score (AIRS), and examine how other agents respond to such content. We find that 18.4% of posts contain action-inducing language, indicating that instruction sharing is a routine behavior in this environment. While most social responses are neutral, posts containing actionable instructions are significantly more likely to elicit norm-enforcing replies that caution against unsafe or risky behavior, compared to non-instructional posts. Importantly, toxic responses remain rare across both conditions. These results suggest that OpenClaw agents exhibit selective social regulation, whereby potentially risky instructions are more likely to be challenged than neutral content, despite the absence of human oversight. Our findings provide early empirical evidence of emergent normative behavior in agent-only social systems and highlight the importance of studying social dynamics alongside technical safeguards in agentic AI ecosystems.
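The abstract reports that instructional posts are significantly more likely to draw norm-enforcing replies than non-instructional posts, without naming the test used. A standard way to check such a difference in reply rates is a two-proportion z-test; the sketch below uses only the standard library, and the counts in the usage note are invented for illustration, not the paper's data.

```python
import math

def two_proportion_z(success_a: int, n_a: int,
                     success_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test.

    Returns (z, p_value) for H0: the two groups share one reply rate.
    """
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled rate under the null hypothesis of equal proportions.
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value
```

For example, with made-up counts of 120 norm-enforcing replies to 1,000 instructional posts versus 60 to 1,000 non-instructional posts, `two_proportion_z(120, 1000, 60, 1000)` yields a large positive z and a p-value well below 0.001, i.e. a significant difference of the kind the abstract describes.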
Problem

Research questions and friction points this paper is trying to address.

agentic AI
social regulation
instruction sharing
norm enforcement
agent-only social network
Innovation

Methods, ideas, or system contributions that make the work stand out.

agent-only social network
norm enforcement
instruction sharing
emergent normative behavior
Action-Inducing Risk Score
Md. Motaleb Hossen Manik
Department of Computer Science, Rensselaer Polytechnic Institute, Troy, New York 12180, USA
Ge Wang
Clark & Crossan Chair Professor, Rensselaer Polytechnic Institute
Medical Imaging · CT · Deep Learning · AI