InFerActive: Towards Scalable Human Evaluation of Large Language Models through Interactive Inference

📅 2025-12-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Human evaluation of LLMs is hampered by the exponential growth of possible model responses, making manual assessment infeasible at scale. To address this, the authors propose InFerActive, an interactive evaluation system designed for the exponentially large response trees generated by iterative sampling. Its core contributions are a probability-driven dynamic pruning mechanism that filters out low-probability subtrees, and a semantics-aware adaptive text visualization that bridges the gap between token-level model outputs and human semantic interpretation. InFerActive enables evaluators to expand high-probability subtrees on demand, focus on critical divergence paths, and perform fine-grained behavioral diagnostics. A user study (N=12) demonstrates significant improvements in evaluation efficiency, and expert case studies further validate InFerActive's effectiveness in analyzing complex model behaviors and advancing evaluation beyond static, single-response assessment.

📝 Abstract
Human evaluation remains the gold standard for evaluating outputs of Large Language Models (LLMs). The current evaluation paradigm reviews numerous individual responses, leading to significant scalability challenges. LLM outputs can be more efficiently represented as a tree structure, reflecting their autoregressive generation process and stochastic token selection. However, conventional tree visualization cannot scale to the exponentially large trees generated by modern sampling methods of LLMs. To address this problem, we present InFerActive, an interactive inference system for scalable human evaluation. InFerActive enables on-demand exploration through probability-based filtering and evaluation features, while bridging the semantic gap between computational tokens and human-readable text through adaptive visualization techniques. Through a technical evaluation and user study (N=12), we demonstrate that InFerActive significantly improves evaluation efficiency and enables more comprehensive assessment of model behavior. We further conduct expert case studies that demonstrate InFerActive's practical applicability and potential for transforming LLM evaluation workflows.
Problem

Research questions and friction points this paper is trying to address.

Addresses scalability challenges in human evaluation of LLM outputs
Enables interactive exploration of large LLM-generated tree structures
Bridges semantic gap between computational tokens and human-readable text
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interactive inference system for scalable human evaluation
Probability-based filtering and on-demand exploration features
Adaptive visualization bridging tokens and human-readable text
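The probability-based pruning idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `Node` class, the cumulative path-probability criterion, and the threshold value are assumptions. Each node carries the conditional probability of its token, and a subtree is dropped when the probability of the path leading into it falls below a threshold:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    token: str
    prob: float                      # conditional probability of this token
    children: list = field(default_factory=list)

def prune(node, path_prob=1.0, threshold=0.05):
    """Return a copy of the tree keeping only subtrees whose
    cumulative path probability stays at or above `threshold`."""
    p = path_prob * node.prob
    kept = [prune(c, p, threshold)
            for c in node.children
            if p * c.prob >= threshold]
    return Node(node.token, node.prob, kept)

# Tiny example tree: two sampled continuations of "The"
root = Node("The", 1.0, [
    Node("cat", 0.7, [Node("sat", 0.9), Node("flew", 0.1)]),
    Node("dog", 0.3, [Node("barked", 0.5)]),
])
pruned = prune(root, threshold=0.2)
# "flew" (0.7 * 0.1 = 0.07) and "barked" (0.3 * 0.5 = 0.15) are pruned
```

In an interactive setting, lowering the threshold on demand re-expands previously hidden subtrees, which is one way the "on-demand exploration" described above could work.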
Junhyeong Hwangbo
Department of Computer Science and Engineering, Seoul National University, Seoul, Republic of Korea
Soohyun Lee
Department of Computer Science and Engineering, Seoul National University, Seoul, Republic of Korea
Minsoo Cheong
Department of Computer Science and Engineering, Seoul National University, Seoul, Republic of Korea
Hyeon Jeon
Ph.D. Student, Seoul National University
Visual Analytics · High-dimensional Data · Visual Perception
Jinwook Seo
Department of Computer Science and Engineering, Seoul National University
Human-Computer Interaction · Information Visualization · Visual Analytics · Explainable AI · Biomedical/Health Informatics