AutoChecklist: Composable Pipelines for Checklist Generation and Scoring with LLM-as-a-Judge

📅 2026-03-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the absence of a unified, composable framework for generating and evaluating checklists for fine-grained model assessment, a gap that has hindered their use in scenarios such as model alignment and reinforcement learning. The authors propose a modular pipeline architecture—comprising a generator, a refiner, and a scorer—that leverages five abstract prompt templates to enable flexible configuration. This design unifies diverse generation strategies for the first time and supports cross-domain adaptation. The system integrates the LLM-as-a-Judge paradigm and is compatible with multiple backends, including OpenAI, OpenRouter, and vLLM, offering access via a Python API, a CLI, and a web interface. Experimental results show that the generated checklists align strongly with human preferences, and a case study on ICLR peer-review rebuttals validates their utility and generalization in real-world settings.

📝 Abstract
Checklists have emerged as a popular approach for interpretable and fine-grained evaluation, particularly with LLM-as-a-Judge. Beyond evaluation, these structured criteria can serve as signals for model alignment, reinforcement learning, and self-correction. To support these use cases, we present AutoChecklist, an open-source library that unifies checklist-based evaluation into composable pipelines. At its core is a taxonomy of five checklist generation abstractions, each encoding a distinct strategy for deriving evaluation criteria. A modular Generator → Refiner → Scorer pipeline connects any generator with a unified scorer, and new configurations can be registered via prompt templates alone. The library ships with ten built-in pipelines implementing published approaches and supports multiple LLM providers (OpenAI, OpenRouter, vLLM). Beyond the Python API, the library includes a CLI for off-the-shelf evaluation and a web interface for interactive exploration. Validation experiments confirm that these checklist methods significantly align with human preferences and quality ratings, and a case study on ICLR peer review rebuttals demonstrates flexible domain adaptation. AutoChecklist is publicly available at https://github.com/ChicagoHAI/AutoChecklist.
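To make the Generator → Refiner → Scorer idea concrete, here is a minimal sketch of a composable checklist pipeline. All names below (`ChecklistItem`, `Pipeline`, the toy generator and scorer) are illustrative assumptions, not AutoChecklist's actual API; in the real library an LLM backend would fill the generator, refiner, and scorer roles.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ChecklistItem:
    """One fine-grained, yes/no evaluation criterion (hypothetical type)."""
    question: str
    weight: float = 1.0

# Component interfaces: any generator can be paired with any scorer.
Generator = Callable[[str], List[ChecklistItem]]                # instruction -> checklist
Refiner   = Callable[[List[ChecklistItem]], List[ChecklistItem]]  # e.g. dedupe, reweight
Scorer    = Callable[[List[ChecklistItem], str], float]         # (checklist, response) -> score

@dataclass
class Pipeline:
    generate: Generator
    score: Scorer
    refine: Refiner = staticmethod(lambda items: items)  # identity refiner by default

    def run(self, instruction: str, response: str) -> float:
        checklist = self.refine(self.generate(instruction))
        return self.score(checklist, response)

# Toy components standing in for LLM calls:
def toy_generator(instruction: str) -> List[ChecklistItem]:
    return [
        ChecklistItem("Does the response address the instruction?"),
        ChecklistItem("Is the response concise?", weight=0.5),
    ]

def toy_scorer(items: List[ChecklistItem], response: str) -> float:
    # Stand-in "judge": pass every item whenever the response is non-empty.
    passed = sum(it.weight for it in items if len(response) > 0)
    return passed / sum(it.weight for it in items)

pipe = Pipeline(generate=toy_generator, score=toy_scorer)
print(pipe.run("Summarize the paper.", "A short summary."))  # -> 1.0
```

Because each stage is just a callable, swapping in a different generation strategy or refiner changes only one constructor argument, which is the composability the paper emphasizes.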
Problem

Research questions and friction points this paper is trying to address.

checklist generation
LLM-as-a-Judge
evaluation criteria
model alignment
composable pipelines
Innovation

Methods, ideas, or system contributions that make the work stand out.

composable pipelines
checklist generation
LLM-as-a-Judge
modular evaluation
prompt-based configuration