🤖 AI Summary
Existing multi-agent learning methods require re-executing the full multi-agent system at inference time, which does little to develop each agent's autonomous problem-solving ability; humans, by contrast, sharpen their reasoning through interaction with others and then solve problems independently. This paper proposes ILR, a collaborative multi-agent learning framework explicitly designed to improve the independent reasoning capabilities of large language models (LLMs). It introduces two core mechanisms: Dynamic Interaction, which adaptively selects cooperative or competitive strategies based on question difficulty and model ability, and Perception Calibration, which refines each agent's reward signal. Agents exchange information through the Idea³ interaction paradigm (Idea Sharing, Idea Analysis, and Idea Fusion), designed to mimic human discussion. Training integrates Group Relative Policy Optimization (GRPO) with reward-distribution transfer, injecting one LLM's reward statistics into another's reward function to strengthen the cohesion of multi-agent interactions. Evaluated on three LLMs across five mathematical benchmarks and one coding benchmark, the approach achieves up to a 5% improvement over the strongest single-agent baseline, enhancing both autonomous problem-solving performance and cross-task robustness.
📝 Abstract
Existing multi-agent learning approaches have developed interactive training environments to explicitly promote collaboration among multiple Large Language Models (LLMs), thereby constructing stronger multi-agent systems (MAS). However, during inference they require re-executing the MAS to obtain final solutions, which diverges from human cognition: individuals enhance their reasoning capabilities through interactions with others and can then resolve questions independently. To investigate whether multi-agent interaction can enhance LLMs' independent problem-solving ability, we introduce ILR, a novel co-learning framework for MAS that integrates two key components: Dynamic Interaction and Perception Calibration. Specifically, Dynamic Interaction first adaptively selects either cooperative or competitive strategies depending on question difficulty and model ability. LLMs then exchange information through Idea³ (Idea Sharing, Idea Analysis, and Idea Fusion), an innovative interaction paradigm designed to mimic human discussion, before deriving their respective final answers. In Perception Calibration, ILR employs Group Relative Policy Optimization (GRPO) to train LLMs while integrating one LLM's reward-distribution characteristics into another's reward function, thereby enhancing the cohesion of multi-agent interactions. We validate ILR on three LLMs spanning two model families of varying scales, evaluating performance on five mathematical benchmarks and one coding benchmark. Experimental results show that ILR consistently outperforms single-agent learning, with improvements of up to 5% over the strongest baseline. We further find that Idea³ enhances the robustness of stronger LLMs during multi-agent inference, and that dynamically selecting interaction types boosts multi-agent learning compared with purely cooperative or competitive strategies.
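The abstract does not spell out how one LLM's reward-distribution characteristics enter another's reward function. The sketch below is one hypothetical reading, not the paper's formulation: GRPO's group-relative advantage (reward normalized by the group's mean and standard deviation) is computed after blending the group's own statistics with those of a peer agent. The function name, the `alpha` mixing weight, and the linear blending rule are all assumptions for illustration.

```python
import statistics


def grpo_advantages(rewards, peer_rewards=None, alpha=0.5):
    """GRPO-style group-relative advantages.

    Each reward in the sampled group is normalized by the group's
    mean and standard deviation. If a peer agent's rewards are given,
    their mean/std are linearly mixed into the baseline statistics
    (a hypothetical form of "reward distribution transfer").
    """
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # avoid divide-by-zero
    if peer_rewards:
        peer_mu = statistics.mean(peer_rewards)
        peer_sigma = statistics.pstdev(peer_rewards) or 1.0
        # Blend own and peer statistics; alpha=0 recovers plain GRPO.
        mu = (1 - alpha) * mu + alpha * peer_mu
        sigma = (1 - alpha) * sigma + alpha * peer_sigma
    return [(r - mu) / sigma for r in rewards]


# Plain GRPO on binary correctness rewards for a group of 4 samples:
print(grpo_advantages([1, 0, 1, 0]))  # → [1.0, -1.0, 1.0, -1.0]

# With a stronger peer whose rewards are all 1, the baseline shifts up,
# so the same correct answers earn smaller advantages:
print(grpo_advantages([1, 0, 1, 0], peer_rewards=[1, 1, 1, 1]))
```

With `alpha=0` the function reduces to standard GRPO normalization, so the transfer term can be ablated cleanly.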