When Large Language Models Meet Law: Dual-Lens Taxonomy, Technical Advances, and Ethical Governance

📅 2025-07-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses four core challenges in applying large language models (LLMs) to legal domains: hallucination, poor interpretability, difficulty adapting across jurisdictions, and ethical asymmetry. Methodologically, it introduces the first “legal-role–NLP-task” mapping taxonomy, integrates the Toulmin argumentation model with legal ontologies to establish a dual-perspective legal reasoning framework, and employs an enhanced Transformer architecture, featuring sparse attention and Mixture-of-Experts (MoE), to enable context-sensitive inference, generative argumentation, and multimodal evidence integration. Key contributions include: (1) a unified technical pathway bridging legal task generalization and formal logical reasoning; (2) workflow-level integration of legal AI components; and (3) the open-sourcing of a repository indexing legal AI papers. Collectively, these advances provide both a conceptual foundation and an implementable paradigm for the technical evolution and responsible governance of legal AI systems.
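The Toulmin argumentation model mentioned above decomposes an argument into six standard components: claim, grounds, warrant, backing, qualifier, and rebuttal. As a rough illustration of how such a structure might be represented computationally, here is a minimal Python sketch; the field names follow Toulmin's standard terminology, and the example argument is hypothetical, not drawn from the paper:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ToulminArgument:
    """Toulmin's six argument components as a plain data structure."""
    claim: str                     # conclusion being argued for
    grounds: str                   # facts/evidence supporting the claim
    warrant: str                   # principle linking grounds to claim
    backing: str = ""              # support for the warrant (e.g. a statute)
    qualifier: str = ""            # strength of the claim ("presumably", ...)
    rebuttals: List[str] = field(default_factory=list)  # defeating conditions

# Hypothetical contract-law example
arg = ToulminArgument(
    claim="The defendant is liable for breach of contract.",
    grounds="The defendant failed to deliver goods by the agreed date.",
    warrant="A party that fails to perform a contractual duty is liable.",
    backing="Contract law: duty of timely performance.",
    qualifier="presumably",
    rebuttals=["Delivery was prevented by force majeure."],
)
print(arg.claim)
```

Representing arguments this way is what makes "dynamic rebuttal handling" (noted in the abstract as an open frontier) expressible: rebuttals are explicit, inspectable fields rather than free text.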

📝 Abstract
This paper establishes the first comprehensive review of Large Language Models (LLMs) applied within the legal domain. It pioneers an innovative dual-lens taxonomy that integrates legal reasoning frameworks and professional ontologies to systematically unify historical research and contemporary breakthroughs. Transformer-based LLMs, which exhibit emergent capabilities such as contextual reasoning and generative argumentation, surmount traditional limitations by dynamically capturing legal semantics and unifying evidence reasoning. Significant progress is documented in task generalization, reasoning formalization, and workflow integration, as well as in addressing core challenges in text processing, knowledge integration, and evaluation rigor via technical innovations such as sparse attention mechanisms and mixture-of-experts architectures. However, widespread adoption of LLMs introduces critical challenges: hallucination, explainability deficits, jurisdictional adaptation difficulties, and ethical asymmetry. This review proposes a novel taxonomy that maps legal roles to NLP subtasks and computationally implements the Toulmin argumentation framework, thus systematizing advances in reasoning, retrieval, prediction, and dispute resolution. It identifies key frontiers including low-resource systems, multimodal evidence integration, and dynamic rebuttal handling. Ultimately, this work provides both a technical roadmap for researchers and a conceptual framework for practitioners navigating the algorithmic future, laying a robust foundation for the next era of legal artificial intelligence. We have created a GitHub repository to index the relevant papers: https://github.com/Kilimajaro/LLMs_Meet_Law.
Problem

Research questions and friction points this paper is trying to address.

Systematically unify legal LLM research via dual-lens taxonomy
Address LLM limitations in legal reasoning and semantics
Solve ethical and jurisdictional challenges in legal AI
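The "legal-role–NLP-task" mapping at the heart of the taxonomy can be pictured as a simple lookup from professional roles to the NLP subtasks that serve them. The sketch below is purely illustrative: the roles and tasks are common examples from legal NLP, not the paper's actual taxonomy entries.

```python
# Hypothetical role -> task mapping; entries are illustrative examples
# from legal NLP, not the paper's taxonomy.
ROLE_TO_TASKS: dict = {
    "judge":    ["legal judgment prediction", "case summarization"],
    "lawyer":   ["case retrieval", "argument generation"],
    "litigant": ["legal question answering", "document drafting"],
}

def tasks_for(role: str) -> list:
    """Return the NLP subtasks mapped to a legal role (empty if unmapped)."""
    return ROLE_TO_TASKS.get(role, [])

print(tasks_for("judge"))  # ['legal judgment prediction', 'case summarization']
```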
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-lens taxonomy integrates legal frameworks and ontologies
Transformer-based LLMs capture legal semantics dynamically
Sparse attention and mixture-of-experts enhance legal tasks
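Sparse attention, named in the bullet above, restricts each token to attending over a subset of positions (for example, a local window) rather than the full sequence, which reduces the quadratic attention cost on long legal documents. A minimal NumPy sketch of a local-window attention mask follows; this is a generic illustration of the technique, not the paper's specific architecture:

```python
import numpy as np

def local_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Boolean mask: position i may attend to j iff |i - j| <= window."""
    idx = np.arange(seq_len)
    return np.abs(idx[:, None] - idx[None, :]) <= window

mask = local_window_mask(seq_len=8, window=2)
# Each row allows at most 2 * window + 1 = 5 positions; edge rows allow fewer.
print(mask.sum(axis=1))  # [3 4 5 5 5 5 4 3]
```

In a Transformer, disallowed positions would have their attention logits set to -inf before the softmax, so each token only mixes information from its local neighborhood.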
👥 Authors
Peizhang Shao
School of Law, China University of Political Science and Law, China and Zhejiang University of Finance and Economics Dongfang College, China
Linrui Xu
School of Information Management for Law, China University of Political Science and Law, China and Department of Artificial Intelligence, Chung-Ang University, Republic of Korea
Jinxi Wang
School of Law, China University of Political Science and Law, China
Wei Zhou
School of Information Management for Law and School of Law, China University of Political Science and Law, China
Xingyu Wu
Hong Kong Polytechnic University
Automated machine learning, Causality-based machine learning, Large foundation model, AutoML