🤖 AI Summary
To address the limited legal literacy among the general public—which impedes accessibility and comprehensibility of legal information—this paper proposes CLIC, a framework for democratizing legal knowledge. First, statutory provisions are structurally parsed and rewritten into plain-language CLIC-pages. Second, large language models (e.g., GPT-3) automatically generate high-coverage, high-diversity, low-cost legal question-answer (QA) pairs. Third, a retrieval-augmented, interactive CLIC recommender is built to enable end-to-end mapping from natural-language queries to precise legal knowledge pages. This work presents the first systematic validation of LLMs’ scalability, diversity, and cost-efficiency in legal QA generation: machine-generated QAs achieve threefold coverage relative to human-annotated ones. In real-world deployment, the prototype system significantly improves both accuracy and efficiency of legal knowledge retrieval for non-experts.
📝 Abstract
Access to legal information is fundamental to access to justice. Yet accessibility means not only making legal documents available to the public, but also rendering legal information comprehensible to them. A vexing problem in bringing legal information to the public is how to turn formal legal documents, such as legislation and judgments, which are often highly technical, into knowledge that is easily navigable and comprehensible to those without legal education. In this study, we formulate a three-step approach for bringing legal knowledge to laypersons, tackling both navigability and comprehensibility. First, we translate selected sections of the law into snippets (called CLIC-pages), each a short article that explains a particular technical legal concept in layperson’s terms. Second, we construct a Legal Question Bank (LQB), a collection of legal questions whose answers can be found in the CLIC-pages. Third, we design an interactive CLIC Recommender (CRec). Given a user’s verbal description of a legal situation that requires a legal solution, CRec interprets the user’s input, shortlists questions from the question bank that are most likely relevant to the given situation, and recommends the corresponding CLIC-pages where the relevant legal knowledge can be found. In this paper we focus on the technical aspects of creating an LQB. We show how large-scale pre-trained language models, such as GPT-3, can be used to generate legal questions. We compare machine-generated questions (MGQs) against human-composed questions (HCQs) and find that MGQs are more scalable, more cost-effective, and more diverse, while HCQs are more precise. We also show a prototype of CRec and illustrate through an example how our three-step approach effectively brings relevant legal knowledge to the public.
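The recommendation step described above can be sketched in miniature: given a user’s free-text description, score each question in the bank by similarity and return the top matches together with their CLIC-page identifiers. The question bank entries, page ids, and the bag-of-words cosine similarity below are illustrative assumptions, not the paper’s actual model or data.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase and split on letter runs; a stand-in for real NLP preprocessing.
    return re.findall(r"[a-z]+", text.lower())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Toy question bank: (question, CLIC-page id). Entries are invented for illustration.
QUESTION_BANK = [
    ("Can my landlord evict me without notice?", "clic-page-tenancy-01"),
    ("What happens if I am arrested by the police?", "clic-page-arrest-02"),
    ("How do I dispute a parking ticket?", "clic-page-traffic-03"),
]

def shortlist(user_query: str, k: int = 2):
    # Rank bank questions by similarity to the user's description
    # and return the top-k matches with their CLIC-page ids.
    q_vec = Counter(tokenize(user_query))
    scored = [
        (cosine(q_vec, Counter(tokenize(question))), question, page)
        for question, page in QUESTION_BANK
    ]
    scored.sort(reverse=True)
    return [(question, page) for score, question, page in scored[:k] if score > 0]
```

A deployed CRec would replace the bag-of-words vectors with representations from a pre-trained language model, but the query → shortlisted questions → CLIC-pages mapping follows the same shape.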