🤖 AI Summary
Large language models struggle to develop robust general reasoning capabilities because their training lacks grounding in authentic multi-agent interactions. To address this limitation, this work proposes the Multi-Agent Reward Optimization (MARO) framework, which constructs a simulated social-interaction environment and decomposes sparse, episode-level outcome signals into fine-grained, behavior-level rewards. MARO further introduces role-aware sample weighting and a direct behavioral-utility evaluation mechanism to mitigate issues arising from imbalanced role distributions and environmental instability. Experiments show that the approach significantly improves performance on social reasoning tasks, and that the acquired capabilities generalize to downstream tasks such as mathematical reasoning and instruction following, validating the role of social learning in fostering general reasoning abilities.
📝 Abstract
In daily life, humans face countless scenarios that demand reasoning and judgment. Existing training methods for large language models, however, mostly have models learn from static textual content or solve predetermined problems, leaving them without experience of realistic scenarios that involve interaction, negotiation, and competition with others. To address this, this paper proposes Multi-Agent Reward Optimization (MARO), a method that lets large language models (LLMs) acquire stronger reasoning abilities by learning and practicing in multi-agent social environments. Specifically, MARO first tackles the sparse-learning-signal problem by decomposing the final success-or-failure outcome into rewards for each specific behavior in the interaction; second, it handles uneven role distributions by balancing the training-sample weights of different roles; finally, it addresses environmental instability by directly evaluating the utility of each behavior. Experimental results show that MARO not only yields significant gains in social reasoning, but also that the abilities acquired through social-simulation learning transfer effectively to other tasks such as mathematical reasoning and instruction following. This reveals the substantial potential of multi-agent social learning for enhancing the general reasoning capabilities of LLMs.
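The first two mechanisms above — turning a sparse episode outcome into per-behavior rewards, and reweighting samples so over-represented roles do not dominate training — can be illustrated with a minimal sketch. All names here (`decompose_outcome`, `role_balanced_weights`, `utility_fn`, the seller/buyer roles) are illustrative assumptions, not the paper's actual implementation:

```python
from collections import Counter

def decompose_outcome(turns, outcome, utility_fn):
    """Turn a sparse episode-level outcome into per-behavior rewards.

    turns: list of (role, action) pairs from one multi-agent episode.
    outcome: episode-level signal, e.g. +1.0 for success, -1.0 for failure.
    utility_fn: scores each action on its own, independent of the other
        agents' behavior -- a stand-in for the direct behavioral-utility
        evaluation the paper describes.
    """
    return [(role, action, outcome * utility_fn(action))
            for role, action in turns]

def role_balanced_weights(turns):
    """Weight samples inversely to role frequency, so that roles that
    appear rarely in the episode are not drowned out during training."""
    counts = Counter(role for role, _ in turns)
    n_roles = len(counts)
    total = len(turns)
    # Weights average to 1 per sample but equalize total mass per role.
    return [total / (n_roles * counts[role]) for role, _ in turns]

# Hypothetical negotiation episode: the seller speaks twice, the buyer once.
turns = [("seller", "offer"), ("buyer", "counter"), ("seller", "accept")]
rewards = decompose_outcome(turns, +1.0, utility_fn=lambda a: 1.0)
weights = role_balanced_weights(turns)
```

The weights sum to the number of samples, but the buyer's single turn receives twice the weight of each seller turn, giving both roles equal total influence on the update.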