Probe by Gaming: A Game-based Benchmark for Assessing Conceptual Knowledge in LLMs

📅 2025-05-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing benchmarks focus predominantly on factual recall and fail to assess large language models' (LLMs) understanding of conceptual boundaries, i.e., the shared and distinguishing semantic features among concepts. Method: CK-Arena is a multi-agent interactive evaluation benchmark built on the Undercover game. It probes LLMs' abilities to describe, differentiate, and infer conceptual boundaries under partial information, pitting closely related concepts against each other as information is revealed progressively across rounds. Contribution/Results: Empirical evaluation shows that LLMs' conceptual understanding is strongly category-dependent and only weakly aligned with parameter count or general-purpose performance. CK-Arena moves beyond static, isolated evaluation paradigms, offering an interaction-driven diagnostic for conceptual knowledge and empirical support for fine-grained, concept-level analysis.

📝 Abstract
Concepts represent generalized abstractions that enable humans to categorize and reason efficiently, yet it is unclear to what extent Large Language Models (LLMs) comprehend these semantic relationships. Existing benchmarks typically focus on factual recall and isolated tasks, failing to evaluate the ability of LLMs to understand conceptual boundaries. To address this gap, we introduce CK-Arena, a multi-agent interaction game built upon the Undercover game, designed to evaluate the capacity of LLMs to reason with concepts in interactive settings. CK-Arena challenges models to describe, differentiate, and infer conceptual boundaries based on partial information, encouraging models to explore commonalities and distinctions between closely related concepts. By simulating real-world interaction, CK-Arena provides a scalable and realistic benchmark for assessing conceptual reasoning in dynamic environments. Experimental results show that LLMs' understanding of conceptual knowledge varies significantly across different categories and is not strictly aligned with parameter size or general model capabilities. The data and code are available at the project homepage: https://ck-arena.site.
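The abstract describes an Undercover-style interaction: each agent receives one of two closely related concepts, takes turns describing its concept without naming it while seeing only the partial information revealed so far, and finally votes on which player holds the minority concept. Below is a minimal sketch of such a game loop, assuming a placeholder `query_llm` helper, a hypothetical concept pair, and illustrative player/round counts; it is not the paper's actual implementation or data.

```python
import random
from collections import Counter

# Hypothetical concept pair; CK-Arena draws pairs of closely related concepts
# from its own categorized word lists (not shown here).
CONCEPT_PAIR = ("violin", "cello")
NUM_PLAYERS = 5   # assumed player count
NUM_ROUNDS = 3    # rounds of progressive description before voting


def query_llm(player_id: int, prompt: str) -> str:
    """Placeholder for an LLM call; swap in a real model client."""
    return f"[player {player_id} response to: {prompt[:40]}...]"


def run_undercover_game() -> None:
    # Assign the minority ("undercover") concept to one random player.
    undercover = random.randrange(NUM_PLAYERS)
    concepts = [CONCEPT_PAIR[1] if i == undercover else CONCEPT_PAIR[0]
                for i in range(NUM_PLAYERS)]

    history: list[str] = []
    for _ in range(NUM_ROUNDS):
        for player in range(NUM_PLAYERS):
            # Each agent describes its concept without naming it, conditioned
            # only on the partial information revealed by earlier statements.
            prompt = (
                f"Your secret concept is '{concepts[player]}'. "
                f"Previous statements: {history}. "
                "Give one true statement about your concept without naming it."
            )
            history.append(f"P{player}: {query_llm(player, prompt)}")

    # Each agent infers from the accumulated statements who holds the
    # minority concept; a majority vote decides the accused player.
    votes = []
    for player in range(NUM_PLAYERS):
        prompt = (
            f"Statements so far: {history}. "
            f"Which player (0-{NUM_PLAYERS - 1}) most likely describes a "
            "different concept? Answer with a single number."
        )
        reply = query_llm(player, prompt)
        digits = [c for c in reply if c.isdigit()]
        votes.append(int(digits[0]) if digits else player)

    accused, _ = Counter(votes).most_common(1)[0]
    print(f"Undercover was P{undercover}; the group accused P{accused}.")


if __name__ == "__main__":
    run_undercover_game()
```

Replacing `query_llm` with real model clients and logging per-round statements and votes would yield the kind of category-by-category comparison the results describe.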
Problem

Research questions and friction points this paper is trying to address.

Assessing LLMs' comprehension of conceptual semantic relationships
Evaluating LLMs' ability to understand conceptual boundaries
Measuring LLMs' conceptual reasoning in dynamic interactive settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Game-based benchmark for LLM conceptual knowledge
Multi-agent interaction to evaluate conceptual reasoning
Scalable dynamic environment for realistic assessment