CaT-Bench: Benchmarking Language Model Understanding of Causal and Temporal Dependencies in Plans

📅 2024-06-22
🏛️ Conference on Empirical Methods in Natural Language Processing
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses LLMs' limited understanding of the implicit causal and temporal dependencies in natural-language plans, particularly sequential tasks like cooking. To this end, the authors introduce CaT-Bench, a fine-grained benchmark that systematically evaluates LLMs' ability to model such dependencies. Methodologically, they formulate a step-order prediction task over a dependency-annotated recipe dataset, evaluate zero-shot and few-shot prompting, and validate the quality of model reasoning via human evaluation. They identify a strong bias toward predicting dependence, suggesting models rely on the surface temporal order of steps as a heuristic, and find that "explain-after-answer" prompting outperforms standard chain-of-thought. Experiments show that state-of-the-art models achieve only 0.59 zero-shot F1 and 0.73 best few-shot F1; human agreement with model reasoning is low, and answers are inconsistent across questions about the same step pairs. The work exposes critical deficits in LLMs' plan-based reasoning and establishes an empirically grounded evaluation paradigm for interpretable and robust plan understanding.
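To make the reported numbers concrete, the following is a minimal illustrative sketch (not the paper's evaluation code) of how binary F1 is computed for step-order predictions, and why the dependence bias the summary describes caps F1 well below 1.0: a model that always predicts "dependent" gets perfect recall but poor precision whenever many step pairs are actually order-independent. The labels and example data below are hypothetical.

```python
# Illustrative F1 computation for binary step-dependence predictions.
# Labels "dependent"/"independent" are assumed, not the benchmark's schema.

def f1(gold, pred, positive="dependent"):
    """Standard binary F1 with "dependent" as the positive class."""
    tp = sum(g == p == positive for g, p in zip(gold, pred))
    fp = sum(p == positive and g != positive for g, p in zip(gold, pred))
    fn = sum(g == positive and p != positive for g, p in zip(gold, pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical gold labels: half the step pairs are order-independent.
gold = ["dependent", "independent", "independent", "dependent"]

# A biased model that always answers "dependent": recall 1.0, precision 0.5.
always_dependent = ["dependent"] * 4
```

Here `f1(gold, always_dependent)` is 2/3, despite perfect recall, because every independent pair becomes a false positive.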

📝 Abstract
Understanding the abilities of LLMs to reason about natural language plans, such as instructional text and recipes, is critical to reliably using them in decision-making systems. A fundamental aspect of plans is the temporal order in which their steps need to be executed, which reflects the underlying causal dependencies between them. We introduce CaT-Bench, a benchmark of Step Order Prediction questions, which test whether a step must necessarily occur before or after another in cooking recipe plans. We use this to evaluate how well frontier LLMs understand causal and temporal dependencies. We find that SOTA LLMs are underwhelming (best zero-shot is only 0.59 in F1), and are biased towards predicting dependence more often, perhaps relying on temporal order of steps as a heuristic. While prompting for explanations and using few-shot examples improve performance, the best F1 result is only 0.73. Further, human evaluation of explanations along with answer correctness shows that, on average, humans do not agree with model reasoning. Surprisingly, we also find that explaining after answering leads to better performance than normal chain-of-thought prompting, and LLM answers are not consistent across questions about the same step pairs. Overall, results show that LLMs' ability to detect dependence between steps has significant room for improvement.
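The abstract's inconsistency finding rests on asking about the same step pair in both directions. A minimal sketch of that idea, assuming a simple yes/no question format (the benchmark's actual phrasing and schema may differ):

```python
# Hypothetical sketch of paired step-order questions. CaT-Bench's real
# question templates are not reproduced here; the format is assumed.

def make_queries(step_a, step_b):
    """Build both directed yes/no questions about one step pair.

    Both questions probe the same underlying dependency, so a model
    that understands it should give the same yes/no answer to each.
    """
    return [
        f'Must "{step_a}" happen before "{step_b}"? Answer yes or no.',
        f'Must "{step_b}" happen after "{step_a}"? Answer yes or no.',
    ]

def consistent(answers):
    """True if all answers to equivalent questions agree."""
    normalized = {a.strip().lower() for a in answers}
    return len(normalized) == 1
```

For example, answering "yes" to the before-question but "no" to the equivalent after-question about the same pair would count as an inconsistency of the kind the abstract reports.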
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Sequential and Causal Reasoning
Natural Language Instructions
Innovation

Methods, ideas, or system contributions that make the work stand out.

CaT-Bench
Causal Reasoning
Zero-shot Learning