CORE: Measuring Multi-Agent LLM Interaction Quality under Game-Theoretic Pressures

📅 2025-08-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the lack of quantifiable evaluation metrics for language interaction quality—particularly lexical diversity—of large language models (LLMs) in multi-agent systems under game-theoretic pressure. We propose CORE, a novel composite metric integrating cluster entropy, lexical repetition rate, and semantic similarity. CORE is the first to jointly incorporate Zipf’s law and Heaps’ law into multi-agent language analysis, enabling characterization of dynamic differences in word-frequency distributions and vocabulary growth across cooperative versus competitive settings. Experimental results reveal that cooperative scenarios yield lexically rich yet highly repetitive utterances, whereas competitive scenarios exhibit constrained vocabulary and sluggish lexical evolution. CORE effectively diagnoses LLMs’ linguistic robustness and adaptability under strategic pressure, offering an interpretable, reproducible evaluation framework for modeling multi-agent language behavior.
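The summary names CORE's three ingredients (cluster entropy, lexical repetition rate, semantic similarity) but not how they are combined. The stdlib sketch below is a hypothetical illustration of how such a composite could be computed: the weighting in `core_score` and the bag-of-words cosine (standing in for whatever embedding similarity the paper actually uses) are assumptions, and cluster labels are taken as precomputed input.

```python
import math
from collections import Counter

def cluster_entropy(labels):
    """Shannon entropy (bits) of utterance-cluster assignments."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def repetition_rate(tokens):
    """Fraction of tokens that repeat an earlier token."""
    return 1 - len(set(tokens)) / len(tokens)

def cosine_similarity(a_tokens, b_tokens):
    """Cosine similarity of bag-of-words vectors (a crude proxy
    for the embedding-based semantic similarity the paper implies)."""
    a, b = Counter(a_tokens), Counter(b_tokens)
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def core_score(labels, tokens, turn_pairs):
    """Hypothetical composite: reward cluster diversity, penalize
    repetition and high turn-to-turn similarity (weights assumed)."""
    sim = sum(cosine_similarity(a, b) for a, b in turn_pairs) / len(turn_pairs)
    return cluster_entropy(labels) * (1 - repetition_rate(tokens)) * (1 - sim)
```

Under this (assumed) multiplicative form, a dialog scores high only when its utterances spread across semantic clusters *and* avoid both token-level repetition and near-duplicate consecutive turns; the paper's actual aggregation may differ.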

📝 Abstract
Game-theoretic interactions between agents with Large Language Models (LLMs) have revealed many emergent capabilities, yet the linguistic diversity of these interactions has not been sufficiently quantified. In this paper, we present the Conversational Robustness Evaluation Score: CORE, a metric to quantify the effectiveness of language use within multi-agent systems across different game-theoretic interactions. CORE integrates measures of cluster entropy, lexical repetition, and semantic similarity, providing a direct lens on dialog quality. We apply CORE to pairwise LLM dialogs across competitive, cooperative, and neutral settings, further grounding our analysis in Zipf's and Heaps' Laws to characterize word frequency distributions and vocabulary growth. Our findings show that cooperative settings exhibit both steeper Zipf distributions and higher Heaps exponents, indicating more repetition alongside greater vocabulary expansion. In contrast, competitive interactions display lower Zipf and Heaps exponents, reflecting less repetition and more constrained vocabularies. These results provide new insights into how social incentives influence language adaptation, and highlight CORE as a robust diagnostic for measuring linguistic robustness in multi-agent LLM systems. Our code is available at https://github.com/psyonp/core.
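The Zipf and Heaps exponents discussed in the abstract are standard corpus statistics and can be estimated by least-squares fits in log-log space. The stdlib sketch below shows one common estimation approach; it is not taken from the paper's code, and the fitting details (e.g. rank cutoffs) are simplified assumptions.

```python
import math
from collections import Counter

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) against log(x)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

def zipf_exponent(tokens):
    """Zipf exponent s, where frequency ~ rank^(-s)."""
    freqs = sorted(Counter(tokens).values(), reverse=True)
    ranks = range(1, len(freqs) + 1)
    return -loglog_slope(ranks, freqs)

def heaps_exponent(tokens):
    """Heaps exponent beta, where vocabulary size ~ n^beta."""
    seen, points = set(), []
    for i, tok in enumerate(tokens, 1):
        seen.add(tok)
        points.append((i, len(seen)))
    xs, ys = zip(*points)
    return loglog_slope(xs, ys)
```

On this reading, a steeper Zipf slope (larger s) means frequency mass concentrates on few words (more repetition), while a higher Heaps exponent means the vocabulary keeps growing as the dialog lengthens, matching the cooperative-setting pattern the abstract reports.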
Problem

Research questions and friction points this paper is trying to address.

Quantify linguistic diversity in multi-agent LLM game interactions
Measure dialog quality across competitive, cooperative, and neutral settings
Evaluate language adaptation under social incentive pressures
Innovation

Methods, ideas, or system contributions that make the work stand out.

CORE metric evaluates multi-agent dialog quality
Integrates cluster entropy, lexical repetition, semantic similarity
Applies to competitive, cooperative, neutral game settings