Do Chatbot LLMs Talk Too Much? The YapBench Benchmark

πŸ“… 2026-01-02
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 1
✨ Influential: 0
πŸ€– AI Summary
Large language models frequently produce verbose and redundant responses even to simple user requests, imposing unnecessary cognitive load. To address this, the work proposes YapBench, a lightweight benchmark that evaluates over-generation on brevity-ideal prompts across three scenarios where a short answer suffices. The study introduces YapScore and YapIndex, character-level metrics that do not depend on any tokenizer, and builds its evaluation set around curated minimal-sufficient answers. Redundancy is quantified as the character-length excess of a model's output over the minimal answer, enabling consistent cross-model comparisons. An evaluation of 76 assistant models shows that median redundancy differs by nearly an order of magnitude across models and uncovers characteristic over-generation patterns, particularly on ambiguous inputs and one-line coding tasks.
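
A minimal sketch of how such a character-level redundancy score might be computed; the function name and the clipping of negative values to zero are assumptions for illustration, not details taken from the paper:

```python
def yap_score(response: str, minimal_answer: str) -> int:
    """Character-level excess length of a model response over the
    curated minimal-sufficient baseline answer (assumed clipped at 0)."""
    return max(0, len(response) - len(minimal_answer))
```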

πŸ“ Abstract
Large Language Models (LLMs) such as ChatGPT, Claude, and Gemini increasingly act as general-purpose copilots, yet they often respond with unnecessary length on simple requests, adding redundant explanations, hedging, or boilerplate that increases cognitive load and inflates token-based inference cost. Prior work suggests that preference-based post-training and LLM-judged evaluations can induce systematic length bias, where longer answers are rewarded even at comparable quality. We introduce YapBench, a lightweight benchmark for quantifying user-visible over-generation on brevity-ideal prompts. Each item consists of a single-turn prompt, a curated minimal-sufficient baseline answer, and a category label. Our primary metric, YapScore, measures excess response length beyond the baseline in characters, enabling comparisons across models without relying on any specific tokenizer. We summarize model performance via the YapIndex, a uniformly weighted average of category-level median YapScores. YapBench contains over three hundred English prompts spanning three common brevity-ideal settings: (A) minimal or ambiguous inputs where the ideal behavior is a short clarification, (B) closed-form factual questions with short stable answers, and (C) one-line coding tasks where a single command or snippet suffices. Evaluating 76 assistant LLMs, we observe an order-of-magnitude spread in median excess length and distinct category-specific failure modes, including vacuum-filling on ambiguous inputs and explanation or formatting overhead on one-line technical requests. We release the benchmark and maintain a live leaderboard for tracking verbosity behavior over time.
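
The abstract defines the YapIndex as a uniformly weighted average of category-level median YapScores over the three categories (A, B, C). A minimal sketch of that aggregation, assuming per-item scores have already been computed as above; the function name and data layout are hypothetical:

```python
from statistics import median

def yap_index(scores_by_category: dict[str, list[int]]) -> float:
    """Uniformly weighted average of per-category median YapScores,
    e.g. {"A": [...], "B": [...], "C": [...]} -> single summary value."""
    medians = [median(scores) for scores in scores_by_category.values()]
    return sum(medians) / len(medians)
```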
Problem

Research questions and friction points this paper is trying to address.

verbosity
over-generation
large language models
response length
cognitive load
Innovation

Methods, ideas, or system contributions that make the work stand out.

YapBench
over-generation
brevity evaluation
YapScore
verbosity benchmark