Convergence of Outputs When Two Large Language Models Interact in a Multi-Agentic Setup

📅 2025-12-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether two independent large language models, Mistral Nemo Base 2407 and Llama 2 13B-hf, exhibit spontaneous output convergence during multi-round, mutually responsive dialogue in a multi-agent setting, without external input. Method: starting from brief seed prompts, the models engage in autonomous generative interaction; convergence is quantified with two metrics, lexical overlap rate and embedding similarity, which track how the outputs evolve over turns. Results: despite differences in architecture and training data, both models consistently transition from initial coherence into rapid repetition of short phrases and behavioral synchronization, demonstrating strong convergence. This reveals an intrinsic stability boundary in purely language-driven multi-agent systems. The authors present this as the first empirical evidence of semantic degradation and pattern locking in unsupervised multi-LLM interaction, offering a benchmark for work on controllability and diversity in multi-agent LLM research.

📝 Abstract
In this work, we report what happens when two large language models respond to each other for many turns without any outside input in a multi-agent setup. The conversation begins with a short seed sentence; after that, each model reads the other's output and generates a response, and this continues for a fixed number of steps. We used Mistral Nemo Base 2407 and Llama 2 13B hf. We observed that most conversations start coherently but later fall into repetition. In many runs, a short phrase appears and then repeats across turns. Once repetition begins, both models tend to produce similar output rather than introducing a new direction, leading to a loop in which the same or similar text is produced repeatedly. We describe this behavior as a form of convergence. It occurs even though the models are large, trained separately, and given no prompt instructions. To study this behavior, we apply lexical and embedding-based metrics to measure how far the conversation drifts from the initial seed and how similar the outputs of the two models become as the conversation progresses.
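The two convergence metrics described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: lexical overlap is approximated here as Jaccard overlap over whitespace tokens, and the embedding similarity is stood in for by cosine similarity over bag-of-words count vectors (the paper presumably uses a learned embedding model instead).

```python
from collections import Counter
import math

def lexical_overlap(a: str, b: str) -> float:
    """Jaccard overlap between the token sets of two utterances."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words count vectors;
    a simple stand-in for the paper's embedding similarity."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Tracking both values per turn makes the reported failure mode visible: as the models lock onto a repeated phrase, both scores rise toward 1.0.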
Problem

Research questions and friction points this paper is trying to address.

Studying convergence when two large language models interact without external input.
Analyzing how conversations between models degrade into repetitive loops.
Measuring lexical and embedding similarity as models produce increasingly similar outputs.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two large language models interact without external input
Conversations start coherently but later fall into repetition
Lexical and embedding metrics measure output convergence
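The interaction protocol these bullets describe, two models alternating replies from a seed for a fixed number of steps, can be sketched as below. `model_a` and `model_b` are hypothetical callables standing in for the actual LLM generation calls; the repetition check is an illustrative heuristic, not the paper's detection method.

```python
def run_dialogue(model_a, model_b, seed: str, steps: int) -> list:
    """Alternate two generator callables for a fixed number of turns.
    Each model reads only the other's most recent output."""
    history = [seed]
    speakers = [model_a, model_b]
    for turn in range(steps):
        reply = speakers[turn % 2](history[-1])
        history.append(reply)
    return history

def locked_phrase(history: list, window: int = 3) -> bool:
    """Crude pattern-locking flag: the same utterance repeated
    verbatim over the last `window` turns."""
    tail = history[-window:]
    return len(tail) == window and len(set(tail)) == 1
```

Running the loop with real models and scoring consecutive turns with lexical and embedding metrics is the measurement setup the paper describes.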
Aniruddha Maiti
West Virginia State University
Artificial Intelligence, Deep Learning, NLP, Data Science, AI & Data Science in Medical Domain
Satya Nimmagadda
Marshall University, Huntington, WV
Kartha Veerya Jammuladinne
West Virginia State University, Institute, WV 25112
Niladri Sengupta
Fractal Analytics Inc., USA
Ananya Jana
Assistant Professor, Marshall University
Deep Learning, Artificial Intelligence, Biomedical Imaging, Computer Vision, Machine Learning