Case Law Grounding: Using Precedents to Align Decision-Making for Humans and AI

📅 2023-10-10
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
How can discrepancies between humans and AI in interpreting abstract, high-level principles (e.g., "constitutional" norms) be mitigated, particularly for subjective or socially embedded decisions? This paper introduces Case Law Grounding (CLG), a novel approach that adapts legal precedent-based reasoning to collective decision-making and AI alignment. CLG anchors decisions in a community's past cases and decisions, treating them as empirically grounded social norms, and can be applied by human reviewers or implemented via retrieval-augmented LLM prompting with precedent matching and optional binding mechanisms. Evaluated against a traditional constitutional-grounding baseline across five groups and communities spanning two decision task domains, CLG produced significantly more accurate decisions in 4 of 5 groups: accuracy improved by 16.0–23.3 percentage points in the human-led process and by 20.8–32.9 percentage points when prompting LLMs. These findings support CLG as a context-sensitive, evolvable complement to constitution-style alignment approaches.
📝 Abstract
Communities and groups often need to make decisions grounded by social norms and preferences, such as when moderating content or providing judgments for aligning AI systems. Prevailing approaches to provide this grounding have primarily centered around constructing high-level guidelines and criteria, similar to legal "constitutions". However, it can be challenging to specify social norms and preferences consistently and accurately through constitutions alone. In this work, we take inspiration from legal systems and introduce "case law grounding" (CLG), a novel approach for grounding decision-making that uses past cases and decisions (precedents) to ground future decisions in a way that can be utilized by human-led processes or implemented through prompting large language models (LLMs). We evaluate how accurately CLG grounds decisions with five groups and communities spread across two decision task domains, comparing against a traditional constitutional grounding approach, and find that in 4 out of 5 groups, decisions produced with CLG were significantly more accurately aligned to ground truth: 16.0–23.3 percentage points higher accuracy using the human-led process, and 20.8–32.9 percentage points higher when prompting LLMs. We also evaluate the impact of different configurations of CLG, such as the case retrieval window size and whether to enforce binding decisions based on selected precedents, showing support for using binding decisions and preferring larger retrieval windows. Finally, we discuss the limitations of our case-based approach as well as how it may be best used to augment existing constitutional approaches when it comes to aligning human and AI decisions.
Problem

Research questions and friction points this paper is trying to address.

Ensuring consistent subjective decision-making by humans and AI
Addressing inconsistent interpretation of constitution-style grounding rules
Improving decision accuracy through a case law grounding approach
Innovation

Methods, ideas, or system contributions that make the work stand out.

Case law grounding (CLG): aligning decisions with community norms via precedents
Implementations for both human-led processes and LLM prompting
Configuration study: binding precedent decisions and larger retrieval windows perform best
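The mechanism behind these contributions can be sketched in code: retrieve the most similar past cases within a window, then either bind the decision directly to a near-identical precedent or assemble a precedent-grounded prompt for an LLM. This is a minimal illustrative sketch, not the paper's implementation; the word-overlap similarity, the `binding_threshold`, and all function names are assumptions standing in for whatever retrieval and matching the authors actually use.

```python
def similarity(a: str, b: str) -> float:
    """Jaccard word-overlap similarity between two case descriptions
    (a stand-in for a real retrieval model, e.g. embeddings)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def retrieve_precedents(case, past_cases, window=3):
    """Return the `window` most similar (description, decision) precedents."""
    ranked = sorted(past_cases, key=lambda c: similarity(case, c[0]), reverse=True)
    return ranked[:window]

def clg_decide(case, past_cases, window=3, binding_threshold=0.9):
    """Case-law-grounded decision step: bind to a near-duplicate precedent,
    or build a precedent-grounded prompt for an LLM to complete."""
    precedents = retrieve_precedents(case, past_cases, window)
    # Binding mode: a near-identical precedent directly fixes the decision.
    if precedents and similarity(case, precedents[0][0]) >= binding_threshold:
        return {"binding": True, "decision": precedents[0][1], "prompt": None}
    # Otherwise, ground the LLM prompt in the retrieved precedents.
    lines = [f"- Case: {d}\n  Decision: {y}" for d, y in precedents]
    prompt = ("Past community decisions (precedents):\n"
              + "\n".join(lines)
              + f"\n\nNew case: {case}\nDecision:")
    return {"binding": False, "decision": None, "prompt": prompt}
```

A larger `window` supplies the model with more precedents to generalize from, while the binding branch enforces consistency with established decisions, mirroring the two configuration choices the paper evaluates.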
🔎 Similar Papers
No similar papers found.