🤖 AI Summary
Large language models (LLMs) are limited in what they can achieve individually on complex reasoning tasks. Method: This paper proposes an "agentic organization" paradigm that embeds organizational structure directly into the model's reasoning process. It introduces AsyncThink, a protocol for asynchronous collaborative thinking in which an organizer dynamically assigns sub-queries to workers, merges their intermediate knowledge into a coherent solution, and the collaboration structure itself is optimized via reinforcement learning. Crucially, the learned asynchronous thinking generalizes to unseen tasks without additional training. Contribution/Results: On mathematical reasoning benchmarks, AsyncThink improves accuracy while achieving 28% lower inference latency than parallel thinking. These results validate the efficacy and scalability of organizationally structured collaborative reasoning, pointing to a new direction for enhancing LLM reasoning through adaptive agent coordination.
📝 Abstract
We envision a new era of AI, termed agentic organization, where agents solve complex problems by working collaboratively and concurrently, enabling outcomes beyond individual intelligence. To realize this vision, we introduce asynchronous thinking (AsyncThink) as a new paradigm of reasoning with large language models, which organizes the internal thinking process into concurrently executable structures. Specifically, we propose a thinking protocol where an organizer dynamically assigns sub-queries to workers, merges intermediate knowledge, and produces coherent solutions. More importantly, the thinking structure in this protocol can be further optimized through reinforcement learning. Experiments demonstrate that AsyncThink achieves 28% lower inference latency compared to parallel thinking while improving accuracy on mathematical reasoning. Moreover, AsyncThink generalizes its learned asynchronous thinking capabilities, effectively tackling unseen tasks without additional training.
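The organizer–worker protocol described above can be sketched as a fork/join pattern. This is a minimal hypothetical illustration in plain Python asyncio, not the paper's implementation: the sub-query splitting, the worker "thinking", and the merge step are all placeholders standing in for LLM calls.

```python
import asyncio


async def worker(sub_query: str) -> str:
    """Stand-in for one worker's thinking on a single sub-query.

    A real worker would be an LLM call; here the result is a
    deterministic placeholder string for illustration.
    """
    await asyncio.sleep(0)  # yield control, as a real model call would
    return f"answer({sub_query})"


async def organizer(query: str, num_workers: int = 2) -> str:
    """Fork sub-queries to workers, then join and merge their results."""
    # Fork: split the query into sub-queries. In AsyncThink the organizer
    # decides this dynamically during its own reasoning; here it is a
    # fixed, hypothetical split.
    sub_queries = [f"{query}/part{i}" for i in range(num_workers)]

    # Workers think concurrently; the organizer resumes once all join.
    partials = await asyncio.gather(*(worker(sq) for sq in sub_queries))

    # Join: merge intermediate knowledge into one coherent solution.
    return " + ".join(partials)


result = asyncio.run(organizer("Q"))
```

The key property this sketch captures is that workers execute concurrently rather than serially, while the organizer retains control of allocation and merging; in the paper, which sub-queries to fork and when to join is itself learned with reinforcement learning.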