From Static Benchmarks to Dynamic Protocol: Agent-Centric Text Anomaly Detection for Evaluating LLM Reasoning

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional static datasets struggle to sustainably evaluate the reasoning capabilities of large language models because they scale poorly and cannot adapt as models evolve. To address this, the work proposes an agent-centric dynamic evaluation paradigm in which three types of autonomous agents (teachers, orchestrators, and students) collaboratively generate, verify, and solve textual anomaly detection problems through iterative interaction. The framework operates without human annotation, automatically escalates task difficulty, and combines adversarial validation with multi-dimensional assessment to guard against pattern-matching shortcuts. Experiments show that the approach uncovers subtle reasoning failures overlooked by conventional benchmarks, enabling a deeper and more sustainable evaluation of model reasoning abilities.

📝 Abstract
The evaluation of large language models (LLMs) has predominantly relied on static datasets, which offer limited scalability and fail to capture the evolving reasoning capabilities of recent models. To overcome these limitations, we propose an agent-centric benchmarking paradigm that moves beyond static datasets by introducing a dynamic protocol in which autonomous agents iteratively generate, validate, and solve problems. Within this protocol, a teacher agent generates candidate problems, an orchestrator agent rigorously verifies their validity and guards against adversarial attacks, and a student agent attempts to solve the validated problems. An invalid problem is revised by the teacher agent until it passes validation. If the student correctly solves the problem, the orchestrator prompts the teacher to generate more challenging variants. Consequently, the benchmark scales in difficulty automatically as more capable agents are substituted into any role, enabling progressive evaluation of large language models without manually curated datasets. Adopting text anomaly detection as our primary evaluation format, which demands cross-sentence logical inference and resists pattern-matching shortcuts, we demonstrate that this protocol systematically exposes corner-case reasoning errors that conventional benchmarks fail to reveal. We further advocate evaluating systems along several complementary axes including cross-model pairwise performance and progress between the initial and orchestrator-finalized problems. By shifting the focus from fixed datasets to dynamic protocols, our approach offers a sustainable direction for evaluating ever-evolving language models and introduces a research agenda centered on the co-evolution of agent-centric benchmarks.
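The abstract describes a concrete control flow: the teacher generates a candidate problem, the orchestrator validates it and sends invalid problems back for revision, the student attempts the validated problem, and a correct solution triggers a more challenging variant. A minimal Python sketch of that loop is shown below; the interfaces and method names (Teacher, Orchestrator, Student, generate, validate, solve, and so on) are hypothetical placeholders for illustration, not the authors' implementation.

```python
# A minimal sketch of the dynamic teacher / orchestrator / student loop described
# in the abstract. Every name below (Teacher, Orchestrator, Student, generate,
# revise, harden, validate, grade, solve) is a hypothetical placeholder chosen
# for illustration; the paper does not specify this API.

from dataclasses import dataclass
from typing import Protocol


@dataclass
class Problem:
    text: str        # passage that may contain an injected textual anomaly
    answer: str      # reference label, e.g. index of the anomalous sentence
    difficulty: int  # difficulty level requested from the teacher


class Teacher(Protocol):
    def generate(self, difficulty: int) -> Problem: ...
    def revise(self, problem: Problem, feedback: str) -> Problem: ...
    def harden(self, problem: Problem) -> Problem: ...  # more challenging variant


class Orchestrator(Protocol):
    def validate(self, problem: Problem) -> tuple[bool, str]: ...  # (is_valid, feedback)
    def grade(self, problem: Problem, prediction: str) -> bool: ...


class Student(Protocol):
    def solve(self, problem: Problem) -> str: ...


def run_protocol(teacher: Teacher, orchestrator: Orchestrator, student: Student,
                 rounds: int = 5) -> list[tuple[Problem, str, bool]]:
    """Run one evaluation episode: generate -> validate -> solve -> escalate."""
    history: list[tuple[Problem, str, bool]] = []
    difficulty = 1
    problem = teacher.generate(difficulty)
    for _ in range(rounds):
        # Invalid problems are sent back to the teacher until they pass validation.
        valid, feedback = orchestrator.validate(problem)
        while not valid:
            problem = teacher.revise(problem, feedback)
            valid, feedback = orchestrator.validate(problem)

        prediction = student.solve(problem)
        solved = orchestrator.grade(problem, prediction)
        history.append((problem, prediction, solved))

        if solved:
            # A correct solution triggers a more challenging variant.
            difficulty += 1
            problem = teacher.harden(problem)
        else:
            # A failure exposes the student's reasoning limit at this difficulty.
            break
    return history
```

The per-episode history collected by such a loop is what the complementary evaluation axes mentioned in the abstract would be computed from, for example the progress between the initial and orchestrator-finalized problems.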
Problem

Research questions and friction points this paper is trying to address.

LLM evaluation
static benchmarks
reasoning capabilities
text anomaly detection
dynamic protocol
Innovation

Methods, ideas, or system contributions that make the work stand out.

agent-centric benchmarking
dynamic evaluation protocol
text anomaly detection
autonomous agent collaboration
progressive reasoning evaluation
Seungdong Yoa
Korea University
Machine learning, Computer vision, Deep learning
Sanghyu Yoon
LG AI Research
Suhee Yoon
LG AI Research
Dongmin Kim
LG AI Research
Ye Seul Sim
LG AI Research
Junhyun Lee
Division of Computer Engineering, Hankuk University of Foreign Studies
Woohyung Lim
LG AI Research
Deep Learning, Representation Learning, Anomaly Detection, Time-series Forecasting