Legal2LogicICL: Improving Generalization in Transforming Legal Cases to Logical Formulas via Diverse Few-Shot Learning

📅 2026-04-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited generalization of systems that translate legal cases into logical formulas, a problem caused primarily by scarce annotated data. To tackle this challenge, the authors propose Legal2LogicICL, a framework that integrates retrieval-augmented generation with structure-aware few-shot learning. By jointly balancing example diversity and similarity along both semantic and legal-text structural dimensions, the method explicitly mitigates semantic shifts and the retrieval bias induced by entity dominance. Notably, it achieves stable logical rule generation without requiring additional model training. Experimental results demonstrate consistent and significant improvements in accuracy, stability, and generalization across both open-source and proprietary large language models. To facilitate further research, the authors also release Legal2Proleg, a new dataset specifically designed for evaluating legal-to-logic translation.

📝 Abstract
This work aims to improve the generalization of logic-based legal reasoning systems by integrating recent advances in NLP with legal-domain adaptive few-shot learning techniques using LLMs. Existing logic-based legal reasoning pipelines typically rely on fine-tuned models to map natural-language legal cases into logical formulas before forwarding them to a symbolic reasoner. However, such approaches are heavily constrained by the scarcity of high-quality annotated training data. To address this limitation, we propose a novel LLM-based legal reasoning framework that enables effective in-context learning through retrieval-augmented generation. Specifically, we introduce Legal2LogicICL, a few-shot retrieval framework that balances the diversity and similarity of exemplars at both the latent semantic representation level and the legal text structure level. Our method also explicitly accounts for legal structure by mitigating entity-induced retrieval bias in legal texts, where lengthy and highly specific entity mentions often dominate semantic representations and obscure legally meaningful reasoning patterns. Legal2LogicICL constructs informative and robust few-shot demonstrations, leading to accurate and stable logical rule generation without requiring additional training. Furthermore, we construct a new dataset, named Legal2Proleg, which is annotated with alignments between legal cases and PROLEG logical formulas to support the evaluation of legal semantic parsing. Experimental results on both open-source and proprietary LLMs demonstrate that our approach significantly improves accuracy, stability, and generalization in transforming natural-language legal case descriptions into logical representations, highlighting its effectiveness for interpretable and reliable legal reasoning. Our code is available at https://github.com/yingjie7/Legal2LogicICL.
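The exemplar-selection idea described in the abstract (favoring demonstrations similar to the query while penalizing redundancy among those already chosen, after masking entity mentions so they do not dominate the representation) can be sketched as a toy MMR-style selector. Everything below is an illustrative assumption, not the paper's actual implementation: the regex entity masker, the bag-of-words "embedding", and all function names are stand-ins for whatever encoder and retrieval machinery Legal2LogicICL actually uses.

```python
# Hedged sketch of diversity-vs-similarity exemplar selection (MMR-style).
# All names and components here are illustrative assumptions.
import re
from collections import Counter
from math import sqrt

def mask_entities(text):
    # Crude stand-in for entity masking: collapse capitalized multi-word
    # spans so long, specific entity mentions do not dominate similarity.
    return re.sub(r"\b(?:[A-Z][a-z]+\s?){2,}", "ENT ", text)

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a sentence encoder.
    return Counter(mask_entities(text).lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_exemplars(query, pool, k=2, lam=0.5):
    """MMR-style selection: lam weights similarity to the query,
    (1 - lam) penalizes redundancy with already chosen exemplars."""
    q = embed(query)
    embs = [embed(p) for p in pool]
    chosen = []
    while len(chosen) < min(k, len(pool)):
        best, best_score = None, float("-inf")
        for i, e in enumerate(embs):
            if i in chosen:
                continue
            sim = cosine(q, e)
            red = max((cosine(e, embs[j]) for j in chosen), default=0.0)
            score = lam * sim - (1 - lam) * red
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return [pool[i] for i in chosen]
```

With a pool containing two near-duplicate cases and one dissimilar case, the redundancy penalty makes the selector pick one of the duplicates plus the dissimilar case, rather than both duplicates, which is the diversity behavior the framework argues for.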
Problem

Research questions and friction points this paper is trying to address.

legal reasoning
logical formulas
few-shot learning
data scarcity
semantic parsing
Innovation

Methods, ideas, or system contributions that make the work stand out.

in-context learning
retrieval-augmented generation
legal reasoning
few-shot learning
logical formula generation
Jieying Xue
Center for Juris-Informatics, ROIS-DS, Tokyo, Japan
Phuong Minh Nguyen
Japan Advanced Institute of Science and Technology, Ishikawa, Japan
Ha Thanh Nguyen
Center for Juris-Informatics, ROIS-DS, Tokyo, Japan
May Myo Zin
Center for Juris-Informatics, ROIS-DS, Tokyo, Japan
Ken Satoh
National Institute of Informatics
Artificial Intelligence