🤖 AI Summary
This study addresses the challenges of trustworthy evaluation of large language models (LLMs) in legal applications, particularly concerning reasoning plausibility, output correctness, and fairness. It provides a systematic review of existing evaluation methodologies, benchmark datasets, and metrics, critically analyzing their applicability and limitations in real-world legal contexts. Departing from conventional approaches, the work constructs a multidimensional evaluation framework grounded in legal practice, encompassing outcome correctness, reasoning reliability, and overall trustworthiness. Through a comprehensive literature review and taxonomic analysis, the research synthesizes insights on task design, data construction, and metric formulation, thereby establishing a theoretical foundation and strategic roadmap for developing more realistic, reliable, and legally coherent evaluation frameworks tailored to authentic legal tasks.
📝 Abstract
Large language models (LLMs) are being increasingly integrated into legal applications, including judicial decision support, legal practice assistance, and public-facing legal services. While LLMs show strong potential in handling legal knowledge and tasks, their deployment in real-world legal settings raises critical concerns beyond surface-level accuracy, involving the soundness of legal reasoning processes and trustworthiness issues such as fairness and reliability. Systematic evaluation of LLM performance on legal tasks has therefore become essential for their responsible adoption. This survey identifies key challenges in evaluating LLMs for legal tasks grounded in real-world legal practice, analyzing the major difficulties in assessing LLM performance in the legal domain across outcome correctness, reasoning reliability, and trustworthiness. Building on these challenges, we review and categorize existing evaluation methods and benchmarks according to their task design, datasets, and evaluation metrics. We further discuss the extent to which current approaches address these challenges, highlight their limitations, and outline future research directions toward more realistic, reliable, and legally grounded evaluation frameworks for LLMs in legal domains.