Ad-hoc Concept Forming in the Game Codenames as a Means for Evaluating Large Language Models

📅 2025-02-17
🤖 AI Summary
Large language models (LLMs) lack rigorous evaluation of their semantic-reasoning and concept-formation capabilities. Method: This work models the word-association game Codenames as a two-role, collaborative-and-adversarial benchmark, enabling controlled assessment by systematically varying target-word abstraction, ambiguity, and opponent strategy; it introduces a prompt-based zero-/few-shot interactive framework incorporating controllable lexicon construction, opponent-speed simulation, and multi-model parallel play. Contribution/Results: Commercial LLMs significantly outperform open-weight counterparts on abstract-word coverage, yet all models suffer >40% accuracy degradation under polysemy interference, revealing instability in semantic aggregation. The work establishes a fine-grained cognitive evaluation paradigm for LLMs and provides a reproducible, controllable testing protocol.

📝 Abstract
This study uses the game Codenames as a benchmarking tool to evaluate large language models (LLMs) on specific linguistic and cognitive skills. LLMs play both sides of the game: one side generates a clue word covering several target words, and the other guesses those target words. We designed experiments that control the choice of words (abstract vs. concrete, ambiguous vs. monosemous) or the opponent (programmed to reveal words faster or slower). Recent commercial and open-weight models were compared side by side to identify the factors affecting their performance. The evaluation reveals details about the models' strategies, challenging cases, and limitations.
Problem

Research questions and friction points this paper is trying to address.

Evaluates LLMs using Codenames game
Assesses linguistic and cognitive skills
Compares performance across various LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes Codenames for LLM evaluation
Controls word choice and opponent speed
Compares commercial and open-weight models
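The two-role protocol described above can be sketched as a simple scoring loop. The function names (`spymaster_stub`, `guesser_stub`, `play_round`) and the scoring rule are illustrative assumptions, not the paper's implementation; in a real harness the two stubs would be replaced by prompted LLM calls for each model under test.

```python
# Minimal sketch of a two-role Codenames evaluation round, assuming a
# hypothetical harness: a "spymaster" emits one clue word plus a count,
# and a "guesser" maps that clue back to words on the board. Both roles
# are deterministic stubs standing in for prompted LLM calls.

def spymaster_stub(targets, board):
    # Stand-in for an LLM clue prompt: return a single clue word
    # intended to cover all target words, plus how many it covers.
    return "animal", len(targets)

def guesser_stub(clue, count, board):
    # Stand-in for an LLM guess prompt: pick `count` board words.
    # This trivial stub just takes the first `count` entries.
    return board[:count]

def play_round(targets, distractors, spymaster, guesser):
    """Score one round as the fraction of targets the guesser recovers."""
    board = list(targets) + list(distractors)  # a real harness would shuffle
    clue, count = spymaster(targets, board)
    guesses = guesser(clue, count, board)
    hits = sum(1 for g in guesses if g in targets)
    return hits / len(targets)

if __name__ == "__main__":
    score = play_round(
        targets=["lion", "horse"],
        distractors=["table", "cloud"],
        spymaster=spymaster_stub,
        guesser=guesser_stub,
    )
    print(f"target recovery: {score:.2f}")
```

Varying the word lists (abstract vs. concrete, ambiguous vs. monosemous) and the opponent's reveal speed, as the paper does, amounts to changing the inputs and turn order of such a loop while holding the scoring rule fixed.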