ACPBench Hard: Unrestrained Reasoning about Action, Change, and Planning

📅 2025-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of systematic evaluation of large language models (LLMs) on unconstrained, generative planning and reasoning tasks. To this end, the authors introduce ACPBench Hard, the first open generative benchmark of its kind, which elevates traditional Boolean and multiple-choice reasoning about atomic actions and state changes to free-form question answering. The key methodological contribution is a verifiable generative evaluation paradigm: task-specific automatic verifiers grounded in formal logical semantics enable accurate, quantitative assessment across diverse LLMs (e.g., Llama, Claude, GPT). Experimental results show that state-of-the-art models score below 65% on most subtasks, with diminishing returns along scaling axes, indicating a fundamental bottleneck in current LLMs' capacity for generative planning reasoning. The benchmark and its verification framework provide a rigorous foundation for diagnosing and advancing reasoning capabilities beyond pattern matching, toward compositional, logically grounded planning.

📝 Abstract
The ACPBench dataset provides atomic reasoning tasks required for efficient planning. It distills the complex plan-generation task into separate atomic reasoning tasks in their easiest possible form: Boolean or multiple-choice questions, where the model has to choose the right answer from the provided options. While the aim of ACPBench is to test the simplest form of reasoning about action and change, a model tasked with planning does not typically have options to choose from; the reasoning required for planning therefore dictates an open-ended, generative form for these tasks. To that end, we introduce ACPBench Hard, a generative version of ACPBench with open-ended questions that the model needs to answer. Models that perform well on these tasks could in principle be integrated into a planner or used directly as a policy. We discuss the complexity of these tasks, as well as the complexity of validating the correctness of their answers, and present validation algorithms for each task. Equipped with these validators, we test a variety of models on our tasks and find that for most tasks the performance of even the largest models is still subpar. Our experiments show that no single model outperforms the others across these tasks, and, with a few exceptions, all tested language models score below 65%, indicating that even the current frontier language models have a long way to go before they can reliably reason about planning. In fact, even the so-called reasoning models struggle with these reasoning tasks. The ACPBench Hard collection is available at: https://ibm.github.io/ACPBench
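The abstract emphasizes that each open-ended task ships with a validation algorithm that checks a model's free-form answer against the formal semantics of the planning domain. As a minimal illustration of what such a checker can look like, the sketch below validates an answer of the form "a plan" under classical STRIPS semantics (applicability, successor states, goal satisfaction). All names and the data layout here are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal STRIPS-style answer validator (illustrative sketch).
from dataclasses import dataclass


@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset   # facts that must hold before applying
    add_effects: frozenset     # facts made true by the action
    delete_effects: frozenset  # facts made false by the action


def is_applicable(state: frozenset, action: Action) -> bool:
    # An action is applicable iff all its preconditions hold in the state.
    return action.preconditions <= state


def apply_action(state: frozenset, action: Action) -> frozenset:
    # Successor state: remove delete effects, then add add effects.
    return (state - action.delete_effects) | action.add_effects


def validate_plan(state: frozenset, plan: list, goal: frozenset) -> bool:
    # A free-form plan answer is correct iff every action is applicable
    # in sequence and the final state satisfies the goal.
    for action in plan:
        if not is_applicable(state, action):
            return False
        state = apply_action(state, action)
    return goal <= state


# Toy blocks-world-style example (hypothetical facts and action).
pick = Action(
    name="pick(b)",
    preconditions=frozenset({"clear(b)", "handempty"}),
    add_effects=frozenset({"holding(b)"}),
    delete_effects=frozenset({"clear(b)", "handempty"}),
)
s0 = frozenset({"clear(b)", "handempty"})
print(validate_plan(s0, [pick], frozenset({"holding(b)"})))  # True
```

Other ACPBench Hard tasks (e.g., naming an applicable action, or a fact that changes) admit analogous checkers: each reduces a free-form answer to a membership or entailment test over the same state-transition semantics, which is what makes generative evaluation automatic and exact.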
Problem

Research questions and friction points this paper is trying to address.

Lack of benchmarks for open-ended, generative reasoning about action and change
Multiple-choice formats do not reflect the open-ended reasoning that planning requires
Validating the correctness of free-form planning answers is itself non-trivial
Innovation

Methods, ideas, or system contributions that make the work stand out.

ACPBench Hard, a generative version of ACPBench with open-ended questions
Task-specific validation algorithms for automatically checking answer correctness
Empirical evaluation of a broad range of language models, including reasoning models