🤖 AI Summary
LLM-as-a-Judge produces unreliable, incomplete judgments when its chain-of-thought (CoT) reasoning fails to cover the details of candidate responses. Method: We propose Crowd-based Comparative Evaluation: (1) leveraging additional crowd responses as references to construct contrastive comparisons that elicit more comprehensive and in-depth judging reasoning; and (2) introducing crowd rejection sampling to improve the efficiency of supervised fine-tuning (SFT). Results: Across five benchmarks, our method achieves an average accuracy gain of 6.7%, produces higher-quality CoTs that facilitate judge distillation, and enables more efficient SFT. Evaluation accuracy also improves as inference-time compute scales.
📝 Abstract
LLM-as-a-Judge, which generates chain-of-thought (CoT) judgments, has become a widely adopted auto-evaluation method. However, its reliability is compromised because the CoT reasoning often fails to capture comprehensive and deeper details, leading to incomplete judgments. Existing methods mainly rely on majority voting or criteria expansion, which are insufficient to address this limitation. We propose Crowd-based Comparative Evaluation, which introduces additional crowd responses for comparison with the candidate responses, thereby exposing deeper and more comprehensive details within them. This process effectively guides LLM-as-a-Judge to produce a more detailed CoT judgment. Extensive experiments demonstrate that our approach enhances evaluation reliability, achieving an average accuracy gain of 6.7% across five benchmarks. Moreover, our method produces higher-quality CoTs that facilitate judge distillation and perform better in rejection sampling for supervised fine-tuning (SFT), a procedure we call crowd rejection sampling, thereby enabling more efficient SFT. Our analysis confirms that the CoTs our method generates are more comprehensive and of higher quality, and that evaluation accuracy improves as inference-time compute scales.
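The two components described above, contrasting candidates against crowd references and keeping only the best-judged response for SFT, can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the function names, prompt wording, and judge interface are all assumptions.

```python
from typing import Callable, List


def build_crowd_comparative_prompt(
    instruction: str,
    candidate_a: str,
    candidate_b: str,
    crowd_responses: List[str],
) -> str:
    """Assemble a judging prompt that contrasts two candidates against crowd references.

    Illustrative sketch only; the paper's prompt template may differ.
    """
    crowd_block = "\n".join(
        f"[Crowd reference {i}] {r}" for i, r in enumerate(crowd_responses, 1)
    )
    return (
        f"Instruction:\n{instruction}\n\n"
        f"Crowd reference responses (for comparison only):\n{crowd_block}\n\n"
        f"Candidate A:\n{candidate_a}\n\n"
        f"Candidate B:\n{candidate_b}\n\n"
        "Compare each candidate against the crowd references, reason step by step "
        "about the details each candidate covers or misses, then state which "
        "candidate is better."
    )


def crowd_rejection_sample(
    candidates: List[str],
    judge_score: Callable[[str], float],
) -> str:
    """Keep the candidate the crowd-informed judge scores highest (rejection sampling for SFT)."""
    return max(candidates, key=judge_score)
```

In practice `judge_score` would wrap an LLM judge that is itself conditioned on the crowd-comparative prompt; here any scoring callable works, which makes the selection step easy to test in isolation.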