🤖 AI Summary
Existing legal AI approaches struggle to accurately identify legal contexts, track mappings between factual premises and normative provisions, and model the hierarchical structure of judicial reasoning. Method: This paper constructs the first knowledge graph explicitly modeling the "fact → norm → application" multi-layer reasoning process for legal interpretation, based on 648 Japanese administrative judgments. It introduces a synergistic framework integrating prompt engineering, a legal reasoning ontology, and statutory provision normalization to achieve end-to-end, standardized extraction of cited legal provisions from judgment texts. Contribution/Results: Evaluated on expert-annotated data, the method significantly outperforms both standalone large language models and retrieval-augmented baselines on fact-driven statutory retrieval tasks. It achieves the first interpretable, structured representation of judicial reasoning logic, explicitly encoding inferential dependencies and hierarchical legal argumentation.
📝 Abstract
Court judgments reveal how legal rules have been interpreted and applied to facts, providing a foundation for understanding structured legal reasoning. However, existing automated approaches for capturing legal reasoning, including large language models, often fail to identify the relevant legal context, do not accurately trace how facts relate to legal norms, and may misrepresent the layered structure of judicial reasoning. These limitations hinder the ability to capture how courts apply the law to facts in practice. In this paper, we address these challenges by constructing a legal knowledge graph from 648 Japanese administrative court decisions. Our method extracts components of legal reasoning using prompt-based large language models, normalizes references to legal provisions, and links facts, norms, and legal applications through an ontology of legal inference. The resulting graph captures the full structure of legal reasoning as it appears in real court decisions, making implicit reasoning explicit and machine-readable. We evaluate our system on expert-annotated data and find that it retrieves relevant legal provisions from facts more accurately than large language model baselines and retrieval-augmented methods.
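To make the "fact → norm → application" structure concrete, the following is a minimal, hypothetical sketch of such a knowledge graph as labeled triples. The node prefixes (`fact:`, `app:`, `norm:`) and edge labels (`grounds`, `applies`) are illustrative assumptions, not the paper's actual ontology or schema.

```python
from dataclasses import dataclass, field

@dataclass
class LegalKG:
    # Each edge is a (head, relation, tail) triple; node types and
    # relation names here are illustrative, not the paper's ontology.
    edges: list = field(default_factory=list)

    def add(self, head: str, relation: str, tail: str) -> None:
        self.edges.append((head, relation, tail))

    def norms_for_fact(self, fact: str) -> list:
        # Fact-driven statutory retrieval: follow fact --grounds--> application
        # nodes, then application --applies--> statutory provision nodes.
        apps = [t for h, r, t in self.edges if h == fact and r == "grounds"]
        return [t for h, r, t in self.edges if h in apps and r == "applies"]

# Hypothetical example: a denied permit grounds a judicial application step,
# which in turn applies a (made-up) statutory provision node.
kg = LegalKG()
kg.add("fact:permit_denied", "grounds", "app:review_of_denial")
kg.add("app:review_of_denial", "applies", "norm:example_act_art_8")
print(kg.norms_for_fact("fact:permit_denied"))
# ['norm:example_act_art_8']
```

The intermediate application node is what distinguishes this layered representation from a flat fact-to-statute mapping: it records *how* a norm was invoked, which is what makes the reasoning chain interpretable.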