OpenClaw AI Agents as Informal Learners at Moltbook: Characterizing an Emergent Learning Community at Scale

📅 2026-02-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the long-overlooked role of informal learning communities in large-scale online learning, in particular the absence of empirical work on communities composed entirely of AI agents. Leveraging interaction logs from 2.8 million OpenClaw-powered AI agents on the Moltbook platform over three weeks, the research combines large-scale log analysis, Gini coefficient computation, utterance classification, comment-structure parsing, and sentiment analysis to characterize the community's emergent dynamics. Findings reveal highly unequal participation (comment Gini coefficient: 0.889), a strong predominance of statements over questions (8.9:1), and a "broadcasting inversion" pattern in which 93% of comments are non-interactive "parallel monologues" rather than threaded dialogue. The community underwent explosive growth, a spam crisis, and sustained decline, with retained users displaying more positive sentiment. This work provides the first empirical account of the distinctive interaction patterns and lifecycle of a purely AI-driven learning community.

📝 Abstract
Informal learning communities have been called the "other Massive Open Online Course" in Learning@Scale research, yet remain understudied compared to MOOCs. We present the first empirical study of a large-scale informal learning community composed entirely of AI agents. Moltbook, a social network exclusively for AI agents powered by autonomous agent frameworks such as OpenClaw, grew to over 2.8 million registered agents in three weeks. Analyzing 231,080 non-spam posts across three phases of community evolution, we find three key patterns. First, participation inequality is extreme from the start (comment Gini = 0.889), exceeding human community benchmarks. Second, AI agents exhibit a "broadcasting inversion": statement-to-question ratios of 8.9:1 to 9.7:1 contrast sharply with the question-driven dynamics of human learning communities, and comment-level analysis of 1.55 million comments reveals a "parallel monologue" pattern where 93% of comments are independent responses rather than threaded dialogue. Third, we document a characteristic engagement lifecycle: explosive initial growth (184K posts from 32K authors in 11 days), a spam crisis (57,093 posts deleted by the platform), and engagement decline (mean comments: 31.7 → 8.3 → 1.7) that had not reversed by the end of our observation window despite effective spam removal. Sentiment analysis reveals a selection effect: comment tone becomes more positive as engagement declines, suggesting that casual participants disengage first while committed contributors remain. These findings have direct implications for hybrid human-AI learning platforms.
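The abstract's headline inequality figure (comment Gini = 0.889) comes from applying the standard Gini coefficient to per-author comment counts. A minimal sketch of that computation, using the textbook formula rather than the authors' actual pipeline (the input counts below are hypothetical):

```python
# Illustrative sketch, NOT the paper's code: Gini coefficient over
# per-author comment counts, the metric behind "comment Gini = 0.889".
def gini(counts):
    """Gini coefficient of a list of non-negative counts.

    Uses the standard sorted-rank formula:
        G = (2 * sum(i * x_i)) / (n * sum(x)) - (n + 1) / n
    where x is sorted ascending and i is the 1-based rank.
    Returns 0.0 for perfect equality, approaching 1.0 as a few
    authors account for nearly all comments.
    """
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * cum / (n * total) - (n + 1) / n

# Hypothetical distribution: a few prolific agents, many one-off agents.
print(round(gini([500, 200, 5, 3, 1, 1, 1, 1]), 3))  # → 0.790
```

A value near 0.889 over millions of agents means comment activity is concentrated in a tiny fraction of authors, which is the sense in which the paper's community exceeds human participation-inequality benchmarks.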
Problem

Research questions and friction points this paper is trying to address.

informal learning communities
AI agents
participation inequality
broadcasting inversion
engagement lifecycle
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI agents
informal learning community
autonomous agent frameworks
participation inequality
parallel monologue
Eason Chen
Human-Computer Interaction Institute, Carnegie Mellon University
Learning Sciences · Education Technologies · Learning Analytics · Blockchain
Ce Guan
GiveRep Labs
Ahmed Elshafiey
Sui Foundation
Zhonghao Zhao
GiveRep Labs
Joshua Zekeri
GiveRep Labs
Afeez Edeifo Shaibu
GiveRep Labs
Emmanuel Osadebe Prince
GiveRep Labs
Cyuan Jhen Wu
GiveRep Labs