🤖 AI Summary
This work addresses the limited deontic reasoning capabilities of current large language models (LLMs) in high-stakes domains such as law and policy, a gap compounded by the absence of evaluation benchmarks grounded in long-context, real-world scenarios. The authors introduce DEONTICBENCH, a benchmark of 6,232 tasks spanning U.S. federal tax law, airline baggage policies, immigration administration, and state housing regulations. For the first time, real-world rule-intensive tasks are formalized into executable Prolog programs, enabling joint evaluation of free-form chain-of-thought and symbolic reasoning. The framework combines prompt engineering, supervised fine-tuning, reinforcement learning, and logic programming to automatically translate natural-language rules into verifiable logic programs. Experiments reveal that even state-of-the-art models achieve only 44.4% accuracy on the most challenging SARA Numeric subset and a macro F1 of 46.6 on Housing tasks, underscoring significant limitations in complex deontic reasoning.
📝 Abstract
Reasoning with complex, context-specific rules remains challenging for large language models (LLMs). In legal and policy settings, this manifests as deontic reasoning: reasoning about obligations, permissions, and prohibitions under explicit rules. While many recent benchmarks emphasize short-context mathematical reasoning, fewer focus on long-context, high-stakes deontic reasoning. To address this gap, we introduce DEONTICBENCH, a benchmark of 6,232 tasks across U.S. federal taxes, airline baggage policies, U.S. immigration administration, and U.S. state housing law. These tasks can be approached in multiple ways, including direct reasoning in language or with the aid of symbolic computation. Besides free-form chain-of-thought reasoning, DEONTICBENCH enables an optional solver-based workflow in which models translate statutes and case facts into executable Prolog, leading to formal problem interpretations and an explicit program trace. We release reference Prolog programs for all instances. Across frontier LLMs and coding models, best hard-subset performance reaches only 44.4% on SARA Numeric and 46.6 macro-F1 on Housing. We further study training with supervised fine-tuning and reinforcement learning for symbolic program generation. Although training improves Prolog generation quality, current RL methods still fail to solve these tasks reliably. Overall, DEONTICBENCH provides a benchmark for studying context-grounded rule reasoning in real-world domains under both symbolic and non-symbolic settings.
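The solver-based workflow described above asks a model to translate statute text and case facts into an executable logic program whose trace can be inspected. The benchmark's reference programs are written in Prolog; the following is a minimal Python sketch of the same idea, using forward chaining over Horn-clause-style rules. The predicates, thresholds, and the baggage-policy clause are illustrative assumptions, not taken from DEONTICBENCH itself.

```python
def forward_chain(facts, rules):
    """Derive all conclusions entailed by the rules.

    Each rule is a pair (premises, conclusion): if every premise
    holds, the conclusion is added, until a fixed point is reached.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts


# Hypothetical translation of a baggage-policy clause:
# "A checked bag over 23 kg obliges the passenger to pay an overweight
#  fee; a checked bag over 32 kg is prohibited from check-in."
rules = [
    ({"checked_bag", "over_23kg"}, "obligation_pay_overweight_fee"),
    ({"checked_bag", "over_32kg"}, "prohibition_check_in"),
    ({"over_32kg"}, "over_23kg"),  # weight classes are nested
]

# Case facts extracted from a (hypothetical) scenario description.
case_facts = {"checked_bag", "over_23kg"}
derived = forward_chain(case_facts, rules)
print("obligation_pay_overweight_fee" in derived)  # True
print("prohibition_check_in" in derived)           # False
```

Separating the rules (from the policy text) from the case facts (from the scenario) mirrors the benchmark's split between statutes and case descriptions, and the derived set plays the role of the explicit program trace.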