🤖 AI Summary
Traditional legal large language models often fall short of the rigor required in legal practice because of hallucinations, outdated information, and limited verifiability. This survey examines the evolution of legal large language models toward agent-based architectures and presents a structured taxonomy of legal agents tailored to the domain. The authors analyze how agent capabilities such as planning, memory, and tool use (sketched below) allow these systems to combine legal knowledge bases with reasoning mechanisms. Building on this foundation, they discuss evaluation methodologies adapted to agentic performance in law and outline key directions for future research, offering both theoretical grounding and practical guidance for developing reliable, autonomous legal AI systems.
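To make the planning, memory, and tool-use loop concrete, here is a minimal illustrative sketch. It is not the paper's framework: every name in it (`LegalAgent`, `retrieve_statutes`, and so on) is a hypothetical stand-in, and the point is only to show how retrieval against a legal knowledge base grounds each reasoning step.

```python
# Minimal sketch of a plan -> retrieve -> remember loop for a legal agent.
# All names here are illustrative stand-ins, not the paper's actual API.
from dataclasses import dataclass, field


@dataclass
class LegalAgent:
    """Toy agent that plans, consults a knowledge base, and keeps memory."""
    memory: list[str] = field(default_factory=list)

    def plan(self, query: str) -> list[str]:
        # Decompose the legal question into retrieval and drafting steps.
        return [f"retrieve authorities for: {query}",
                f"draft answer for: {query}"]

    def retrieve_statutes(self, step: str) -> str:
        # Stand-in for a lookup against a legal knowledge base; grounding
        # each step in retrieved, citable sources is what mitigates
        # hallucination and outdated information.
        return f"[citation found for '{step}']"

    def answer(self, query: str) -> str:
        for step in self.plan(query):
            evidence = self.retrieve_statutes(step)
            self.memory.append(evidence)  # persist context across steps
        return f"Answer to '{query}', grounded in: {self.memory}"


if __name__ == "__main__":
    agent = LegalAgent()
    print(agent.answer("Is a verbal contract enforceable?"))
```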
📝 Abstract
Large language models (LLMs) have driven dramatic improvements in the legal domain, yet standalone models remain limited by hallucination, outdated information, and poor verifiability. Recently, LLM agents have attracted considerable attention as a solution to these challenges, leveraging advanced capabilities such as planning, memory, and tool use to meet the rigorous standards of legal practice. In this paper, we present a comprehensive survey of LLM agents for legal tasks, analyzing how these architectures bridge the gap between technical capabilities and domain-specific needs. Our major contributions include: (1) systematically analyzing the technical transition from standard legal LLMs to legal agents; (2) presenting a structured taxonomy of current agent applications across distinct legal practice areas; (3) discussing evaluation methodologies specifically for agentic performance in law; and (4) identifying open challenges and outlining future directions for developing robust and autonomous legal assistants.