🤖 AI Summary
Large language models (LLMs) lack rigorous evaluation of their semantic reasoning and conceptual generation capabilities. Method: This work pioneers the use of the word-association game Codenames as a bidirectional collaborative–adversarial benchmark, enabling controlled assessment by systematically varying target-word abstraction, ambiguity, and adversarial response strategies. It introduces a prompt-engineering-based zero-/few-shot interactive framework that incorporates controllable lexicon construction, response-delay simulation, and multi-model parallel adversarial protocols. Contribution/Results: Commercial LLMs significantly outperform open-source counterparts in abstract-word coverage, yet all models suffer more than 40% accuracy degradation under polysemy interference, revealing critical instability in semantic aggregation. This work establishes a novel, fine-grained cognitive evaluation paradigm for LLMs and provides a reproducible, controllable testing protocol.
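A minimal sketch of how such controlled conditions might be parameterized, assuming a simple grid that crosses word properties with opponent behavior; the category labels, word lists, and field names below are illustrative placeholders, not the paper's actual lexicon or configuration.

```python
from itertools import product

# Hypothetical condition grid: abstraction and ambiguity of the target
# lexicon are crossed with the opponent's reveal speed, so each factor's
# effect can be measured in isolation. Word lists are placeholders.
WORD_SETS = {
    ("concrete", "monosemic"): ["violin", "glacier", "helmet"],
    ("concrete", "ambiguous"): ["bank", "bat", "spring"],
    ("abstract", "monosemic"): ["honesty", "courage", "justice"],
    ("abstract", "ambiguous"): ["interest", "charge", "drive"],
}
OPPONENT_MODES = ["fast_reveal", "slow_reveal"]

conditions = [
    {"abstraction": abstr, "ambiguity": ambig, "opponent": mode,
     "words": WORD_SETS[(abstr, ambig)]}
    for (abstr, ambig), mode in product(WORD_SETS, OPPONENT_MODES)
]
```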
📝 Abstract
This study uses the game Codenames as a benchmark to evaluate large language models (LLMs) on specific linguistic and cognitive skills. LLMs play both sides of the game: one side generates a clue word covering several target words, and the other side guesses those target words. We designed experiments that control the choice of words (abstract vs. concrete, ambiguous vs. monosemic) and the opponent's behavior (programmed to reveal words faster or slower). Recent commercial and open-weight models were compared side by side to identify the factors affecting their performance. The evaluation reveals details about the models' strategies, the cases they find challenging, and the limitations of LLMs.
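A minimal sketch of one such clue-and-guess round, assuming a generic text-completion callable for each side; the prompt wording, the `llm_spymaster`/`llm_guesser` callables, and the scoring are hypothetical illustrations, not the study's actual harness.

```python
def play_round(llm_spymaster, llm_guesser, board, targets, num_guesses):
    """One Codenames round: clue generation, guessing, and scoring."""
    # Clue-giver side: produce a single clue word covering several targets.
    clue_prompt = (
        "You are the clue giver in Codenames.\n"
        f"Board words: {', '.join(board)}\n"
        f"Your target words: {', '.join(targets)}\n"
        "Reply with one clue word (not on the board) that connects the targets."
    )
    clue = llm_spymaster(clue_prompt).strip()

    # Guesser side: recover the intended targets from the clue alone.
    guess_prompt = (
        "You are the guesser in Codenames.\n"
        f"Board words: {', '.join(board)}\n"
        f"Clue: {clue}\n"
        f"Name the {num_guesses} board words the clue most likely refers to, "
        "comma-separated."
    )
    guesses = [g.strip() for g in llm_guesser(guess_prompt).split(",")][:num_guesses]

    # Score: fraction of intended target words recovered by the guesser.
    hits = len({g.lower() for g in guesses} & {t.lower() for t in targets})
    return clue, guesses, hits / len(targets)
```

A full evaluation would repeat such rounds across the controlled word sets and opponent settings described above, aggregating accuracy per model pairing.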