Beyond Blind Spots: Analytic Hints for Mitigating LLM-Based Evaluation Pitfalls

📅 2025-12-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) employed as code evaluators—termed LLM-as-Judges (LaaJ)—exhibit poor reliability in detecting domain-critical errors, particularly in COBOL legacy system modernization. Method: We propose a parsing-based prompt enhancement mechanism featuring (i) a lightweight, expert-knowledge-driven parser that dynamically generates injectable, domain-specific analytical prompts to enable synergistic evaluation between LLMs and rule-based analysis; and (ii) a domain-informed error taxonomy coupled with a multi-LaaJ ensemble framework. Contribution/Results: Our approach elevates the average error detection rate of LaaJ from 45% to 94%, substantially improving interpretability and evaluation robustness. All components—including the benchmark dataset, prompt templates, and implementation framework—are fully open-sourced.

📝 Abstract
Large Language Models are increasingly deployed as judges (LaaJ) in code generation pipelines. While attractive for scalability, LaaJs tend to overlook domain-specific issues, raising concerns about their reliability in critical evaluation tasks. To understand these limitations in practice, we examine LaaJ behavior in a concrete industrial use case: legacy code modernization via COBOL code generation. In this setting, we find that even production-deployed LaaJs can miss domain-critical errors, revealing consistent blind spots in their evaluation capabilities. To characterize these blind spots, we analyze generated COBOL programs and the associated LaaJ judgments, drawing on expert knowledge to construct a preliminary taxonomy. Based on this taxonomy, we develop a lightweight analytic checker tool that flags over 30 domain-specific issues observed in practice. We use its outputs as analytic hints, dynamically injecting them into the judge's prompt to encourage the LaaJ to revisit aspects it may have overlooked. Experiments on a test set of 100 programs using four production-level LaaJs show that a LaaJ alone detects only about 45% of the errors present in the code (across all judges we tested), while the analytic checker alone lacks explanatory depth. When combined, the LaaJ+Hints configuration achieves up to 94% coverage (for the best-performing judge and injection prompt) and produces qualitatively richer, more accurate explanations, demonstrating that analytic-LLM hybrids can substantially enhance evaluation reliability in deployed pipelines. We release the dataset and all prompts used.
Problem

Research questions and friction points this paper is trying to address.

Mitigating LLM blind spots in code evaluation
Enhancing reliability of LLM-based code judges
Combining analytic hints with LLM judgments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight analytic checker flags domain-specific issues
Dynamic injection of analytic hints into LLM judge prompts
Hybrid analytic-LLM configuration enhances evaluation reliability
Ora Nova Fandina
IBM Research
NLP · Language Models · Metric Embedding · Approximation · Theory
Eitan Farchi
IBM Research Lab in Haifa
test optimization · reviews · concurrency
Shmulik Froimovich
Unknown affiliation
Raviv Gal
IBM Research, Israel
Wesam Ibraheem
IBM Research, Israel
Rami Katan
IBM Research, Israel
Alice Podolsky
IBM Research, Israel