Time To Impeach LLM-as-a-Judge: Programs are the Future of Evaluation

📅 2025-06-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM evaluation methods suffer from high API costs, low reliability, inflexible workflows, and inherent biases. To address these limitations, we propose PAJAMA, the first framework to introduce the “Program-as-a-Judge” paradigm: leveraging LLMs to automatically synthesize executable Python judgment programs—replacing opaque, black-box scoring with transparent, interpretable, auditable, and reusable evaluation logic. Key technical contributions include LLM-driven program synthesis, rule distillation, local program execution, and bias calibration. Experiments demonstrate that PAJAMA improves inter-judge consistency by 15.83%, reduces biased responses by 23.7%, outperforms LLM-based judges by 2.19–8.67% on the CHAT-HARD benchmark, and slashes evaluation costs by three orders of magnitude.

📝 Abstract
Large language models (LLMs) are widely used to evaluate the quality of LLM generations and responses, but this leads to significant challenges: high API costs, uncertain reliability, inflexible pipelines, and inherent biases. To address these, we introduce PAJAMA (Program-As-a-Judge for Automated Model Assessment), a new alternative that uses LLMs to synthesize executable judging programs instead of directly scoring responses. These synthesized programs can be stored and run locally, costing orders of magnitude less while providing interpretable and auditable judging logic that can be easily adapted. Program-based judges mitigate biases, improving judgment consistency by 15.83% and reducing biased responses by 23.7% on average compared to a Qwen2.5-14B-based LLM-as-a-judge. When program judgments are distilled into a model, PAJAMA outperforms LLM-as-a-judge on the challenging CHAT-HARD subset of RewardBench, surpassing it by 2.19% on Prometheus and by 8.67% on the JudgeLM dataset, all at three orders of magnitude lower cost.
Problem

Research questions and friction points this paper is trying to address.

High API costs and uncertain reliability in LLM evaluations
Inflexible pipelines and inherent biases in LLM judging
Need for interpretable, auditable, and cost-effective evaluation methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs synthesize executable judging programs
Local execution reduces costs significantly
Program-based judges reduce biases effectively
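To make the "Program-as-a-Judge" idea concrete, here is a minimal sketch of what a synthesized judging program could look like. The rules, weights, and function name are illustrative assumptions, not the paper's actual synthesized code; the point is that the scoring logic is plain Python that can be read, audited, edited, and run locally at negligible cost.

```python
# Hypothetical sketch of a Program-as-a-Judge artifact. The specific
# rules and weights are invented for illustration; a real PAJAMA
# program would be synthesized by an LLM from judging criteria.

def judge_response(prompt: str, response: str) -> float:
    """Score a response on a 0-1 scale using transparent, auditable rules."""
    score = 0.0

    # Rule 1 (weight 0.4): the response should engage with the prompt's terms.
    prompt_terms = {w.lower() for w in prompt.split() if len(w) > 3}
    response_terms = {w.lower() for w in response.split()}
    overlap = len(prompt_terms & response_terms) / max(len(prompt_terms), 1)
    score += 0.4 * min(overlap * 2, 1.0)

    # Rule 2 (weight 0.3): penalize empty or trivially short answers.
    if len(response.split()) >= 10:
        score += 0.3

    # Rule 3 (weight 0.3): reward signs of structure, a cheap proxy for clarity.
    if any(marker in response.lower() for marker in ("1.", "- ", "first", "then")):
        score += 0.3

    return round(score, 2)
```

Because every rule is an inspectable line of code rather than an opaque model judgment, a biased rule can be located and corrected directly, and the same program can be reused across thousands of responses without further API calls.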