Interactive Learning for LLM Reasoning

📅 2025-09-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing multi-agent learning methods require repeated full-system execution during inference, hindering the development of individual agent problem-solving autonomy—unlike humans, who iteratively enhance reasoning through interactive engagement. This paper proposes a collaborative multi-agent learning framework explicitly designed to improve the independent reasoning capabilities of large language models (LLMs). It introduces two core mechanisms: Dynamic Interaction, which adaptively selects cooperative or competitive strategies, and Perception Calibration, which refines agents' internal representations. Further, it adopts the Idea3 interaction paradigm—comprising idea sharing, joint analysis, and synthesis—to foster deep cognitive collaboration. Finally, it integrates Group Relative Policy Optimization (GRPO) with reward-distribution transfer to jointly optimize intra-group consistency and strengthen individual capability. Evaluated across three LLMs and six mathematical and programming tasks, the approach achieves up to a 5% absolute improvement over single-agent baselines, enhancing both autonomous problem-solving performance and cross-task robustness.

📝 Abstract
Existing multi-agent learning approaches have developed interactive training environments to explicitly promote collaboration among multiple Large Language Models (LLMs), thereby constructing stronger multi-agent systems (MAS). However, during inference, they require re-executing the MAS to obtain final solutions, which diverges from human cognition, where individuals enhance their reasoning capabilities through interactions with others and later resolve questions independently. To investigate whether multi-agent interaction can enhance LLMs' independent problem-solving ability, we introduce ILR, a novel co-learning framework for MAS that integrates two key components: Dynamic Interaction and Perception Calibration. Specifically, Dynamic Interaction first adaptively selects either cooperative or competitive strategies depending on question difficulty and model ability. LLMs then exchange information through Idea3 (Idea Sharing, Idea Analysis, and Idea Fusion), an innovative interaction paradigm designed to mimic human discussion, before deriving their respective final answers. In Perception Calibration, ILR employs Group Relative Policy Optimization (GRPO) to train LLMs while integrating one LLM's reward-distribution characteristics into another's reward function, thereby enhancing the cohesion of multi-agent interactions. We validate ILR on three LLMs across two model families of varying scales, evaluating performance on five mathematical benchmarks and one coding benchmark. Experimental results show that ILR consistently outperforms single-agent learning, yielding an improvement of up to 5% over the strongest baseline. We further discover that Idea3 can enhance the robustness of stronger LLMs during multi-agent inference, and that dynamic interaction types can boost multi-agent learning compared to purely cooperative or competitive strategies.
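The Perception Calibration step can be illustrated with a small sketch. GRPO normalizes each sampled response's reward against its group's mean and standard deviation; the abstract describes mixing one LLM's reward-distribution characteristics into another's reward function. The mixing weight `alpha` and the specific blending rule below are assumptions for illustration, not the paper's exact formulation.

```python
import statistics

def grpo_advantages(rewards):
    # Standard GRPO-style advantage: normalize each sampled response's
    # reward within its group (mean-centered, std-scaled).
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard against zero std
    return [(r - mu) / sigma for r in rewards]

def calibrated_advantages(own_rewards, peer_rewards, alpha=0.5):
    # Hypothetical "perception calibration": blend the peer agent's group
    # reward statistics into this agent's normalization, nudging both
    # agents toward a shared reward scale. `alpha` is an assumed knob.
    mu = (1 - alpha) * statistics.mean(own_rewards) + alpha * statistics.mean(peer_rewards)
    sigma = (1 - alpha) * statistics.pstdev(own_rewards) + alpha * statistics.pstdev(peer_rewards)
    sigma = sigma or 1.0
    return [(r - mu) / sigma for r in own_rewards]
```

With `alpha=0` this reduces to plain per-agent GRPO normalization; larger `alpha` pulls the agent's advantage estimates toward its peer's reward distribution.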
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLMs' independent reasoning through multi-agent interaction learning
Developing adaptive interaction strategies for cooperative and competitive scenarios
Calibrating agent perceptions to improve multi-agent system cohesion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Interaction adaptively selects cooperative or competitive strategies
Idea3 paradigm mimics human discussion for information exchange
Perception Calibration uses GRPO to enhance multi-agent cohesion
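The control flow of Dynamic Interaction plus the three-stage Idea3 paradigm might be sketched as below. The `ask` callable stands in for an actual LLM call, and the difficulty threshold for choosing cooperative vs. competitive mode is an assumption; the paper conditions on both question difficulty and model ability.

```python
def idea3_round(agents, question, difficulty, ask):
    # Dynamic Interaction: pick a strategy per question.
    # (Threshold of 0.5 is an assumed placeholder.)
    mode = "cooperative" if difficulty > 0.5 else "competitive"

    # 1. Idea Sharing: each agent drafts an initial idea.
    ideas = {a: ask(a, f"[{mode}] Propose an idea for: {question}") for a in agents}

    # 2. Idea Analysis: each agent critiques its peers' ideas.
    analyses = {
        a: ask(a, "Analyze peers' ideas:\n" + "\n".join(v for k, v in ideas.items() if k != a))
        for a in agents
    }

    # 3. Idea Fusion: each agent fuses its own idea with the analysis
    # and derives its final answer independently.
    return {
        a: ask(a, f"Fuse your idea with this analysis and answer:\n{ideas[a]}\n{analyses[a]}")
        for a in agents
    }
```

Note that every agent produces its own final answer, consistent with the paper's goal of training independent problem-solving rather than re-running the full MAS at inference time.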
Hehai Lin
The Hong Kong University of Science and Technology (Guangzhou)
NLP/LLM/LVLM Reasoning, Multi-agent System (MAS)
Shilei Cao
Sun Yat-sen University
Minzhi Li
National University of Singapore
Sudong Wang
The Hong Kong University of Science and Technology (Guangzhou)
Haotian Wu
The Hong Kong University of Science and Technology (Guangzhou)
Linyi Yang
Southern University of Science and Technology
Natural Language Processing, Machine Learning, AI for Research
Juepeng Zheng
Sun Yat-sen University
Chengwei Qin
HKUST(GZ), NTU
LLM, NLP